[S1E4] Alan Albert: “We tend to think our first ideas are the best ones, and we run with them. But sometimes they're not the best ideas.”

On confidence, finding the best ideas, and running interviews that dig deep into customers’ “why”

Find out more about Alan at https://www.marketfit.com/

<<Intro – season>>

<<Intro – episode>>

I'm very excited about this episode, because it goes into all the things that I wish we discussed more, both in B2B SaaS and in the startup industry as a whole. From how to get better answers to your research questions to the value of defining the problem you're actually trying to solve first, and specific questions that can help you engineer better solutions to your business problems.

Let's dig in!

<<INTERVIEW>>

Alan Albert

I'm Alan Albert. I have been a serial entrepreneur. I co-founded three software companies and sold one of them to Apple Computer, quite an experience there. And for much of my career, I've bounced around different roles within software and tech companies: leading product, product design, user interface design, software development, marketing, product marketing, support services, manufacturing of hardware devices. I've been responsible for strategy, for legal. I've been CEO, on the board of companies, chair of the board, investor, advisor. And each time I switched from one role to another, I was a beginner.

And my perspective was that what I was now seeing was so different from what I had previously thought was reality. And that's led me to always question my own perspective, and wonder whether my views are – well, that's what I see. But what are the other views that I'm not seeing? And to value the perspective of others.

And that led me to my current role, which I've been doing for over a decade now: serving as an executive coach, a team coach, a product coach, helping companies to grow and to achieve their objectives by providing the value of perspective, the ability for them to see things from other perspectives, in particular from the perspectives of their customers. And so that's what I do now. And I consider it the best job I've ever had in my life.

Ekaterina (Sam) Howard

Why is that?

Alan Albert

Why is that?

It's because I value learning. And I am currently learning at a faster pace than at any time in my career. I have the benefit of being able to work with a whole lot of different companies in different industries: product companies, services companies, B2B, B2C, tech, non-tech. I get to learn from the experiences of my clients, to share my perspective, and to have them try things, whether that's what we talked about or something completely different. They're all doing their own experiments. And I get to learn from that.

And I enjoy having conversations, such as this one, and such as ones that you and I have had before, about how we each approach things. And then I get to learn from what worked, what didn't work, and be able to share that with the next person I talk with. And my perspectives are continually being rounded out by the viewpoints and learnings from others. And for that I'm really grateful and really happy.

Ekaterina (Sam) Howard

Why do companies that you work with need a coach? Like, shouldn't they be able to figure it all out on their own?

Alan Albert

That's a totally fair question. And it's one I've asked myself; I'm a questioner too.

Every sports team needs a coach and benefits from the right coach. I don't think every individual necessarily needs or wants a coach. But just like the best surgeon in the world should never operate on themselves, there are perspectives, ideas, information, and questions that are not going to come to us on our own.

And there's real value in having a close, vulnerable, intimate connection with someone whose role is to support, to encourage, to inform, to challenge, to question assumptions with an empathetic and supportive heart, bringing experience and ideas that one might not have considered otherwise.

And in particular, this applies to leaders. Leaders are expected to lead. They're expected to have the answers, to be able to direct people, and to do so with confidence.

And the challenge is that when we embody that role, we either embody it fully and act as if we know, or we have these periods of doubt – “What if I'm wrong?”

And the last thing a leader should do is expose those doubts to the employees, to the investors, to the customers. “Hey, follow me, I might know where I'm going” is not an encouraging, motivating message. It has to be done with confidence.

And in order to gain that confidence – and in order to amplify that confidence – it really helps to have someone with whom we can be vulnerable, to have our doubts addressed, our strategies challenged, our ideas questioned and explored.

To explore questions like:

  • Is this the best way we can be doing things?

  • Are there alternatives?

  • How do I know a good strategy when I see one?

  • How do I know the right customer when I see one?

  • How do I know the best new feature to add when I see it?

Everyone has ideas. We tend to think our first ideas are the best ones, and we run with them. But sometimes they're not the best ideas. And being able to test those in a confidential and safe space can amplify the confidence that a leader can bring into their role when they're out in public, playing that role and embodying it wholly and fully.

Ekaterina (Sam) Howard

Interesting.

Alan Albert

How so?

Ekaterina (Sam) Howard

Well, two things that come to mind. One is that creating that space, to challenge ideas and assumptions and to be able to not be the confident leader, that sounds like it might be a very important but challenging thing to do.

Alan Albert

You're absolutely right about that. Not everybody walks into a conversation saying, “Hey, I've got all these doubts and worries and concerns. I don't know if I'm going to succeed, I don't know if I'm on the right path. Please help.” And yet everybody can provide help to others, and everybody needs help. We all need the help of others.

Our confidence comes not just from within ourselves, but from other people having confidence in us. And in order to build up that confidence, we do need to be a bit vulnerable, and to share what we need help with.

Sometimes we're aware of what that is, sometimes we're not. But being open to exploring those spaces allows that confidence to build and to be reflected back by the people who express their confidence in us. And so early in coaching, or in prospective coaching conversations, it's important to establish that vulnerability between us.

And I say that “between us” meaning mutually, I can't just show up and say: “Hi, be vulnerable.” That's unlikely to be effective. And the best way I know to encourage that vulnerability is to be vulnerable myself.

None of us are perfect. All of us need help. I need help with things just as much as anyone else does. And just because I'm a coach doesn't mean I have all the answers.

I have a lot of experience. I have made a lot of mistakes from which I've learned, and I'm happy to share the results of that learning. And I've learned a lot from the experiences of others, as we were just discussing.

But my objective in a coaching relationship is to support the person or people who I'm coaching. And it's only supportive if the person I'm working with feels that it is supportive.

And that's why in my work, for the last 11 years now, I have offered a guarantee to every single client: that they will perceive our relationship as valuable. And when I send them an invoice for the work, they can pay the full amount or any fraction thereof, including nothing if they perceive no value.

There's some vulnerability there. But I do that not just for the vulnerability, but for the alignment. I am not a fan of hour-based pricing, because I don't want someone to be wondering: is it worth having a conversation? Am I gonna get my money's worth?

I'm guaranteeing that they'll perceive value over the course of the relationship. And they can cancel it anytime, pay any amount that they feel it is worth. But my objective is to support them.

And through that vulnerability, we align our interests. I'm showing that the thing I'm looking to amplify is the growth and strength of the person I'm working with. And that kind of vulnerability is one of the ways I hope to encourage the people I work with to share not the same vulnerability but a different vulnerability, so that we can work together towards their objectives without a concern that I may have different objectives, such as, you know, burning consulting hours. I don't enjoy that, and I don't charge for that. It's not about the amount of time. It's about supporting the objectives of the person, or people, I'm working with.

Ekaterina (Sam) Howard

I'm probably biased because I am US-based and a lot of what I see online is driven by the US startup culture. But it feels like, first of all, there is an expectation that founders, regardless of whether this is the first time they've built something or not, should have all the answers. And you're right, you need to be confident to move forward. Otherwise, why would others trust you? And at the same time, what I see online all the time is the expectation that if I'm going to have a coach, or if I'm going to hire a consultant, you'd better have all the answers. Which seems like a very unrealistic expectation on both ends. So my question would be: are you seeing that? Or is it just my social media bubble? And how have you been experiencing that over the years? Has it changed at all?

Alan Albert

Yeah, it has. I'm originally from the US, I now live in Canada, I've had clients in other parts of the world as well. Every place has its own culture. And that culture is distinctive to the environment that people grow up in, and it influences us and it encourages certain behaviors and discourages others.

My role is not to say this is the right behavior or the wrong behavior. Everyone is a free agent; it's their job to choose what they think is best. In my role as a coach, as well as a trainer of teams, I try hard not to tell my clients, “This is the right way to do things.” I don't know that it's the right way. I know that it is a good way, one of many good ways. And there are multiple paths.

And in life, we never get to do a true A/B test. We can try something, and it works or it doesn't, but we never know what would have happened had we chosen another path along the way. We sometimes talk as though we learned from those kinds of experiences, but it's not a controlled experiment.

And so yes, we are encouraged to act confidently. My own journey from a person who did not act confidently towards one who projects and feels more confident is reflective of the environment that I worked in, that I grew up in.

And I discovered that people actually enjoy communicating with someone who is confident. That confidence, when expressed, encourages, as I was describing earlier, confidence to be reflected back. And that amplifies it.

Nothing is more confidence-boosting than having people confident in your abilities, in your success, and in your ability to create the success that we're all working towards together.

And it's a good thing to have confidence, but it's also a good thing to be able to express those doubts.

One of the greatest characteristics of successful product people is doubt. The ability to question one's own thinking and to question the thoughts and ideas and plans of others in order to compare, contrast, and consciously choose which is the best way.

And if we're only holding on to the first idea that comes to mind, we've put ourselves at a fork in the road: we can either continue to believe what we've always believed, and ignore evidence to the contrary, or we can learn.

And sticking with, and considering only, what we currently believe is another way of saying “I refuse to learn.”

And without learning, we can't grow; we can't get better. Given the choice, I would rather consider alternative perspectives, and work to integrate those ideas into a more multi-dimensional perspective that includes the viewpoints of others. Just because someone thinks something different from me does not mean that they are wrong, or that I am wrong. We can both be right, just viewing things from different angles.

Ekaterina (Sam) Howard

So in a situation where founders need to make decisions, about product strategy or otherwise, how is this skill useful? Isn't it better to just have that vision and, by God, make the market listen to you?

Alan Albert

We do celebrate that perspective. People are attracted to people and companies that boldly go and make big bets. When they win, we celebrate them; and when they fail, we enjoy laughing at their foolishness: “Of course, they should have known.”

And going out publicly with doubt is not, as I was saying earlier, the path towards being perceived as being on the path to success.

And yet, if the underlying assumptions upon which our success is predicated are wrong, or not fully accurate, we're putting ourselves at risk of going boldly down a path that may not succeed, or may not reach its potential, because we got it wrong.

And there's value in the ability to question that, to remove the ego from it for part of the time, and ask: “What might we not be seeing here? What information am I depending upon as being true that I could test before I go out and proclaim that this is the direction? Maybe I'll find out that I was right, but maybe I'll find out there could have been a better way.”

Success improves when we choose the right path.

And our ability to choose the right path depends on our ability to consider multiple paths.

Otherwise, we're just picking the first idea that comes to mind and we can act confident about it. But that doesn't make it true.

<<BREAKOUT>>

Ekaterina (Sam) Howard

I love that this came up in our conversation, because I've been low-level obsessed with Annie Duke's How to Decide for years, it feels like. And I feel that this is a very underappreciated step in decision-making: spending more time on developing decision-making criteria and listing possible solutions than on agonizing over which solution to pick.

There's a lot of pressure to move fast and keep going, but what's the point of moving fast if it's not in the direction we actually want?

<<INTERVIEW>>

Ekaterina (Sam) Howard

So that's something that comes up a lot in my work: assumption-based strategy. For a startup team, how can they internally troubleshoot this and understand whether these are useful doubts, meaning the assumptions really do need to be validated, versus just spiraling because it's scary to move forward?

Alan Albert

There are some great tools available to us. Our mind is sometimes our greatest asset, and sometimes it's our worst enemy. Our assumptions are rarely front and center. When we have a belief in a particular strategy, or an idea for a new feature or a new product, we tend to see evidence for what we believe, and we tend not to see evidence for things that might cast that belief in doubt.

And that's well documented in the research.

Anyone who thinks that just because they believe it, it must be true, is considering themselves an exception to the rest of humanity.

And so the hardest part is actually recognizing and identifying what those assumptions are: what must be true in order for this path to actually work, or to be the best path.

And so the identification of assumptions is a useful exercise for an individual, and even better for a team, to go through. We're often better at observing the assumptions of others.

And so it helps to bring in others on the team for a brainstorming session, asking, you know:

  • What's wrong with this picture?

  • What would make it go wrong?

  • What has to go right?

– in order for this to work. Finding ways of identifying the assumptions upon which our strategies, ideas, features, and products are built then provides a path towards testing them.

And this may be a bit contrary to the standard way of thinking about things, but the best way to test an idea is not to test the idea; it's to test multiple ideas.

We tend to celebrate the, oh, you know, “build it, measure it, and then decide” approach.

And yet, if all we do is consider that one thing, we're gonna get data; some data will support it, some won't. We'll look at the evidence supporting it and say, “Yay, we've got evidence supporting it, it must be right.” And then it doesn't pan out. And we wonder why: we had evidence. Everybody said they liked my product. But that does not necessarily mean that they would buy the product. And so we should be asking what needs to be true in order for this idea to succeed, and considering alternatives.

So rather than test one idea, why not test all the ideas? Having either infinite hypotheses or zero hypotheses is a faster way to reach success than testing one at a time. Very controversial. But I've seen lots of evidence of this in the wild.

Ekaterina (Sam) Howard

Could you give us an example?

Alan Albert

Sure. Just look at how A/B tests are typically done. We have a feature, and someone says, “Oh, I know how to make it better. We're currently doing A, and we'll now do B.”

So in the best of cases, we build B. Say it takes just one sprint, and we're a pretty fast team. Two weeks go by, and we have this new B ready to test. Then we run a controlled experiment, A versus B. And we've got a lot of customers, so we can gather enough data to reach significance in just one more sprint. Okay, so now a month has elapsed, and we've got data to compare A and B.

What happens is that if you look at a lot of A/B tests, seven out of eight of them fail to show any significant difference between A and B. Inconclusive. It's just not different enough to be confident that B is better than A, or that A is better than B.

So it takes around eight of these month-long A/B tests to reach one piece of learning. I would say learning one thing every eight months is a slow pace. And I believe that we can do better.

Ekaterina (Sam) Howard

I would agree.

So in terms of assumptions that are just a huge red flag, in your opinion, are there any that stand out, where this is the point you need to stop and think of multiple alternatives?

Alan Albert

Yes. Teresa Torres has a great model, the Opportunity Solution Tree, that, particularly when deployed through a path of continuous discovery, not just a one-time experiment, is really good at figuring out not just “is my idea a good one?”, but “what is the best path to achieve a particular objective?”

And so, rather than start with “Hey, I have an idea,” we can first ask the questions. What are we trying to achieve here? What is the business objective?

And you know, I often ask companies this: what is your business objective? And I'll often get an answer like “we're trying to grow.” Alright, are we trying to grow [market] share? Are we trying to grow revenue? Are we trying to grow profit? Are we trying to grow the number of customers? Are we trying to grow the number of daily or monthly active users? What kind of growth are we seeking? Are we trying to grow by attracting new customers or by retaining the ones that we've got?

And unless we are clear about what we're trying to do, it's like trying to hit a bull's eye without knowing where the target is.

Ekaterina (Sam) Howard

Can we just do everything at once?

Alan Albert

I'm going to assume that that’s…

Ekaterina (Sam) Howard

Yes, I’m just snarking.

Alan Albert

It is hard to hit multiple targets at once. And often they are in conflict with each other: [like] maximizing profit and maximizing market share are rather different objectives that lead to different strategies for achieving them.

And so once we've identified that business objective, we can look at our customer journey.

And ask, “Where in the customer journey can we exert the greatest impact on achieving that objective?”

So is that by attracting more people to the top of the funnel? Getting more people to start a trial? Getting more people successfully through that trial? Getting them to start paying? Getting them to continue paying? Getting them to upgrade? Getting them to stop leaving and churning and dropping out the bottom? Getting them to refer and recommend the product to other customers?

There's always one point in the customer journey that can have the greatest impact.

So consciously choosing that point, in light of our business objective, is a way to ensure that we're spending our time doing the most important thing: identifying the greatest opportunity.

Anytime a company is spending [time on] something that is not the greatest opportunity, they're actually harming their ability to achieve the greatest opportunity.

It sucks resources, time, and attention away from it. And by focusing on what we're trying to achieve, we are more likely – it seems sort of obvious – but we're more likely to achieve it. So we can identify the objective, and we can identify the point in the customer journey where we can have that greatest impact.

And then ask the opportunity solution tree question that Teresa Torres asks: “In what ways, emphasis on the plural, might we change what's going on here, so as to achieve our objective?”

How might we increase the number of people who are visiting our site? How might we increase retention?

Questions like that, and not stopping at the first idea that comes to mind.

Ekaterina (Sam) Howard

So in that case, is the goal to find the right way? Or is it to test a multitude of ways?

Alan Albert

The first step is to identify a multitude of ways.

Once we have a way in mind, we tend to defend it, and point out all the ways in which it will achieve our objective. And those may be true.

But there may be a better way to achieve that same objective, and unless we consider alternatives, we're not in a position to identify the best way.

So if we're only considering one idea, the chance that it is the best idea is quite small.

If we consider multiple ideas, it increases the likelihood that, assuming we're smart and evaluate properly and without bias, we're able to select the best ways to test. We're increasing our chance of success simply by considering multiple paths.

One question that I love to ask is, “Alright, we have this objective; how will we know a good solution when we see it?”

So rather than start with the solution, and compare it to the objective, start with the criteria that describe that ideal solution. And then compare multiple ideas against those criteria.

So, I mean, what does the ideal feature [look like]?

Is it easy to learn? Or is it better for it to be easy to use? Or is it transaction speed, being able to go through it quickly? Or is the processing speed behind the scenes the most important thing? Are we optimizing for trust, for happiness, or for convenience?

We can't optimize for all of these things at once.

But if we can identify and then prioritize the factors that make a good solution when we see it, we're actually drawing lines that gradually surround the solution space.

And often we resist constraints on a design. But with those constraints, those criteria, those factors that make a good solution when we see it, the more we have, the better.

Because having more constraints, more criteria that define a good solution, and clear priorities makes our solution space smaller and smaller, such that when we have an idea that's within that solution space, we know that it meets our criteria. It's either within our solution space or not. And the tighter the space, the more likely we've pinpointed the right and best solution.

<<BREAKOUT>>

Ekaterina (Sam) Howard

This is technically not a growth hack, but it should be!

Ultimately, restricting the number of options you're willing to consider saves you so much time and energy.

This also applies to copy. One of the reasons why writing your website copy can seem impossible is not having those constraints in place. And this is why copy briefs are such an amazing way to really focus your writing on the things that your prospects care about.

<<INTERVIEW>>

Ekaterina (Sam) Howard

So how does customer research fit in?

Alan Albert

Oh, my goodness. Without it, we're just guessing. And in my experience, when teams guess, they tend to guess wrong more often than right. You know, 40 years ago, we were before the advent of Agile, of the Lean software movement, and of what Marty Cagan calls the Product Operating Model, with empowered teams doing continuous discovery.

Before all that, the failure rate of new startups, of new products, [and] of new features was, by different measures, around 80%. Some said 70, some said 90, some said as high as 95. A high failure rate. And then along came all these advances in our ability to build and design and test new products.

And now, the failure rate is exactly the same.

It hasn't budged. And so if Agile and Lean and the Product Operating Model are so good, wouldn't we expect the average to have moved a little bit? And yet there's no evidence that it has.

And this is a question that I have that's been tugging at me for a while now.

Because I see teams that are doing the things. They've got empowered teams. They do all the things that are lean. They're Agile. They build, measure, learn. And they still have a high failure rate.

And our culture has gone so far as to celebrate that failure rate – “fail fast.”

It sounds like an encouragement to fail. And the nice thing is, and there's plenty of evidence of this, that those advances in our product processes and mindsets have increased the rate at which we can fail, and as a result have lowered the cost of failure.

And that's good. That's worth celebrating.

But I feel that we ought to be able to do better. I mean, in what other industry would we celebrate a failure rate above 70%?

There aren't many. Certainly not construction, or surgery. You know, like you want your plumbing fixed: “Oh, well, we'll [try… but we fail to] fix it 70% of the time.” That's not going to succeed, and yet we celebrate that in product.

And I think that's giving ourselves a pass, when we might question whether we actually are succeeding, and whether there's a better way.

Ekaterina (Sam) Howard

So the pace of failing has sped up considerably. But the pace of learning has not.

Alan Albert

The pace of learning may have increased. The rate of success? I haven't seen evidence.

Ekaterina (Sam) Howard

So we're just learning the wrong things.

Alan Albert

Well, what we're learning is not helping us build better products.

[What we're learning isn't decreasing our failure rate, isn't increasing our certainty that what we build will be valued, and isn't increasing our pace of building successful products.]

And the waste that comes from building the wrong thing, and then fixing it and fixing it and fixing it, is really costly. I mean, even from a climate perspective, the amount of time, computing time, and energy spent to build the wrong thing is incredibly costly.

And then even when we release it, that feature, that particular design, is out there in the marketplace as part of the product, consuming resources and failing.

And the people who are using it are failing; they're struggling, not achieving what they're trying to do. It's a ripple effect with enormous costs in terms of waste, waste of energy.

The lean manufacturing movement was put in place to reduce waste in the manufacturing process. If you encounter a problem, you don't just fix the problem; you fix the source of the problem.

With lean software, it's build, measure, learn. But we're starting with building. What are we building? How do we know that's the right thing to build?

I think build-measure-learn is a great cycle to go through. But starting with building is the most costly way to go through that cycle.

Why not start with measuring: measure, learn, then build, rinse and repeat?

We can skip the build cycle for most of the learning that a team needs to do, and gain enormous efficiencies as a result.

Ekaterina (Sam) Howard

So what would measuring look like in that case?

Alan Albert

So when we talk about measuring, we often talk about research, also called discovery.

And we typically talk about two kinds of discovery.

There's problem discovery and solution discovery.

  • Are we solving the right problem?

  • Are there people who actually have this problem that is worth addressing?

  • And then, of all the ways that we could solve it, are we solving it well? Are we providing products and features that actually address that problem, in a good enough way to succeed in the marketplace?

In my experience, those are great things to do.

Most teams do more solution discovery than problem discovery, to their peril.

Because there is no such thing as a great solution to a non-problem. And if people don't care enough about the problem to pay to solve it, you can have a fabulous solution; it doesn't matter how good it is. People still aren't going to buy it, because they don't care.

And so the importance of problem discovery is grossly overlooked and underappreciated.

And companies can cut out a whole lot of the waste in their building cycle by first not confirming that a problem exists, but testing whether a problem exists. There's a difference in assumption there.

It's easy to find examples of people saying they want something.

  • How do we test whether they want it?

  • Will they actually pay money to do it?

  • Have they actually spent money to do it?

  • Have they actually spent time on a do-it-yourself model?

  • If they haven't, how important of a problem is it?

All of that is discoverable, quickly and efficiently.

And Alberto Savoia (no relation to me, Alan Albert) has a great model he calls pretotyping: finding a way of testing an idea prior to building it.

It offers great ways of getting real evidence that real people will spend real money on a not-yet-real product or feature.

But besides those two areas of discovery, problem discovery and solution discovery, there's an area that I believe is highly overlooked, or underappreciated, depending on which side of it you're looking at.

And that other area of discovery, I call value discovery.

For there to be a problem, a solution available for it, and people who buy that solution, behind it all there is a reason why people consider that thing a problem, and a reason why people would value a particular solution.

And understanding that why, in my opinion, and in the experience of the companies I've worked with who have discovered it, is so impactful that it affects everything downstream from there. Rather than building a feature or a product that addresses a problem, we can instead build something that provides the value that people are seeking.

And when people see that value, value they don't have and do want, they're attracted to it, and they buy it.

And by understanding the why, and by amplifying the delivery of what satisfies and delights that why, we can cut out the building of stuff that people don't really care about, and devote those resources to amplifying the delivery of that identified value in a way that just blows away the competition.

Ekaterina (Sam) Howard

For example?

Alan Albert

For example, I hate using Apple as an example, because it's an easy example. But it also happens to be an excellent example here.

If we transport ourselves back to – I think it was 2008 – the typical laptop cost $1,000. And it had a keyboard, a screen, four USB ports, an ethernet port, a removable battery, a CD-ROM drive. And they were successful.

And then along comes this new product. And it lacks so much of what those $1,000 laptops had. It had the slowest processor in its class, it didn't have any ethernet port. Instead of the four USB ports that were standard, it only had one. It didn't have any DVD drive or CD-ROM drive, none at all. The battery was not removable. And it was released to mediocre reviews. And it cost 80% more than a typical $1,000 laptop.

That's a bold move, coming out with a worse product and charging an 80% premium.

And yet that laptop, the MacBook Air introduced by Apple, became the best selling laptop of all time.

And when I saw that, I thought alright, why not? I bought one. I knew why I bought it. But I looked at why people were buying it. And the easy answer is “Oh, the Apple brand.”

But the Apple brand was not that strong in those days. They were struggling for survival and just coming out of that struggle. It was not an "Oh, it's an Apple product. It must be good. I'm gonna buy it."

If you were looking for a laptop, and you wanted it to be light enough to carry with you all day, and you didn't want to carry around the power cord, you wanted the battery to last long enough so you could use it all day, there was only one product in the marketplace that did that.

And that was the MacBook Air. And people often say that Steve Jobs did not value research. And it's true. He did not value market research.

But he did his own. And he spent hours at Fry's Electronics, watching people buy. Watching people buy laptops, watching them pick [it] up, watching them ask questions, hearing what they were asking about, and I wasn't there, but I can well imagine that he heard people saying to themselves, “How heavy is this? Can I carry it around with me? How long is the battery going to last?” And thinking, gosh, if there were a lightweight laptop that lasted, these people would want it.

And so the MacBook Air is notable because it removed functionality previously considered as essential – absolutely must-have features – in order to amplify the delivery of value, satisfying that why in a dimension that was previously not considered either important or achievable.

And that pattern has repeated itself, time and again, in every single disruptive innovation that I have been able to think of.

It's a test and it's a challenge to you and anyone who's listening.

Every single disruptive product has removed functionality previously considered essential, in order to amplify the delivery of value in a dimension that was previously considered either unavailable or unimportant.

This happened with digital cameras. They were worse than film cameras in every single dimension: poor resolution, poor color rendition, shutter lag – they were actually quite terrible. And yet, if you wanted to put those pictures on the Internet, where the resolution and the color rendition actually didn't matter all that much, they were so much better than film. You didn't have to wait to finish the roll, you didn't have to send it off for developing, you didn't have to scan it. You could just take out the card, put it in your laptop, and upload it – so much easier.

And in order to achieve that, they had to let go of the functionality previously considered as must-have.

And that happened again, with smartphone cameras which were worse than digital cameras. If you wanted to share socially, why wait to get back to your laptop and upload it, why not just press the share button.

<<BREAKOUT>>

Ekaterina (Sam) Howard

That really sounds counterintuitive, because in most cases the first instinct is to add more stuff. After all, this is how feature wars start.

But at the same time, adding new features doesn’t always translate into adding value.

For example, I record these breakouts in Otter first as voice notes to myself – and I really don’t need summaries, comments or outlines – or actually any kind of text analysis AI functionality – on top of the transcript.

When I signed up for Otter, it was a perfect match for what I wanted to be doing. But over the years, the features have been drifting far away from what I need. And, honestly, all of those new features lead to no value add for me. Go figure!

Be that as it may, there is a huge gap between identifying a need and actually validating that there’s enough people out there willing to pay for it. Coming up next: how to handle that so that you’re not just looking for a confirmation that your next startup idea is awesome.

<<INTERVIEW>>

Ekaterina (Sam) Howard

I want to talk about the part where, in my experience, things go wrong the most often: talking to prospects, customers or folks that you hope to target with your product once it exists. And not realizing that not only do people lie, sometimes they lie in such sneaky ways that you just can't even catch it. And so you get data that then translates into a product that they don't actually want.

Alan Albert

Yes, yeah, there's lots of ways that teams can and do go wrong. I have been just as guilty of these as anyone else. Simply knowing about them is not insulation from doing them. But it is a real advantage.

That awareness allows us to question ourselves, to question our teammates, to consider alternative approaches.

Some of the ways that I and others have gone wrong are, I mentioned this earlier, starting with an idea and seeking evidence that people like it. We're gonna find evidence. The question is: is there sufficient evidence? Or is there evidence that there's a better way?

And unless we consider that, we're unlikely to see it. And so rather than starting with an idea, start with discovery:

What do people care about?

What is that why that is motivating their desire for a better world?

What are those whys, plural? Multiple whys.

So starting with an idea, I think is step one on a highly risky path. I wouldn't say the path to failure, but your chance of failure is far higher, if we hold on to our first idea, and only seek evidence to support it and don't consider alternatives.

Another way that I and others have gone wrong, is in believing that in order to learn we have to build. This is costly, it's slow. And it's highly inefficient in discovering what actually will motivate the customers to do the kinds of behaviors that we want them to do in order to achieve our business objectives.

So by thinking we need to build in order to learn – that ties into the build, measure, learn framework – we go wrong and we start with the build. If we can start with measuring – measuring not the product, not the idea, not the problem, but measuring the customer – figuring out the why within that customer, [what] are those whys, those multiple reasons why they may be dissatisfied with the status quo, and may be more attracted to a future in which things are better?

Better in what ways? How will they know a good solution when they see it?

Doesn't matter what I think, what the people on the team think, what the person who had the idea thinks, what matters is what's in the mind of our customers. And so that is discoverable.

But, again, as you're alluding to, it is easy to go wrong and seek evidence that the reasons why we think people might want something are there.

But in so doing, we may fail to discover that there are other more important and stronger reasons why they might want that feature or why they otherwise might want it except for that one thing that will keep them from buying entirely. All of that is within their own minds. And it's our job to discover that.

<<BREAKOUT>>

Ekaterina (Sam) Howard

Let’s pause for a second and unpack this.

First of all, it is super easy to start taking what our interviewees are saying at face value. And this is not to say that people are just deliberately trying to manipulate us. But there are rules of engagement, so to speak, so most people are not going to go out of their way to be unpleasant to you. Or they might feel like it's not worth being seen as confrontational. And then the next layer is that sometimes people need some time to figure out what makes them solve a particular challenge in that very specific way. Or what makes it worth solving it in the first place.

And sometimes, asking follow-up questions can be an easy way around this, as long as you remember to do so. But it's not just that, it's also being able to actively listen – and stay engaged with their story.

Especially if you’re a founder with a tech background, hearing “empathy” may be triggering a full body cringe. But it’s not that scary – as you’re about to find out.

<<INTERVIEW>>

Ekaterina (Sam) Howard

So in our previous conversation, you mentioned that empathy was one of the key ingredients to getting to those deeper whys that are not surface level, not just the first thing that comes to mind. Empathy is this fuzzy word; some founders think, why do I need to do the touchy-feely stuff, can I just talk about the features already?

Alan Albert

That was me to a T. I would never have been accused of being the most empathetic person in the room. And yet, when I began doing research, I realized that I was not really connecting with the people who I was interviewing, I wasn't having a conversation, I was going through a list of questions. I was getting answers, but I wasn't sure they were true.

I was filled with doubt. And I wondered whether there was a better approach.

Ekaterina (Sam) Howard

How did you realize that? How did you get to the point where you weren't sure that those were the answers you could trust?

Alan Albert

A couple of ways.

One is, I was naturally filled with self doubt. I was not a confident person. And I always doubted. But especially when I do research, I discover things, I act upon those discoveries, and the product does not succeed, that's a clue that something went wrong.

And failure, as unpleasant as it is, is a fantastic teacher. And the desire not to fail again is strong. And if we lean in to that appreciation of the value of failure, we can learn, we can doubt, we can consider alternative reasons why [this] did not work, and look for reasons why it might go wrong.

And there's some great books about ways that research can go wrong, and how to conduct better interviews. The Mom Test by Rob Fitzpatrick is an excellent resource. I hate the title of the book, but I love the content. And it includes lots of ways of thinking about how the way we ask questions affects the kinds of answers that we get.

But I was keying into empathy, because I felt that I was not connecting with the people who I was interviewing. And I wondered whether I might be able to gain deeper insight by connecting at a deeper level. And even by testing whether what I was learning was accurate.

And there are all kinds of different definitions of empathy that are out there. Different types of empathy. It's a fascinating topic. I went pretty deep in exploring it from an intellectual perspective (which shows where I was coming from). And one of the definitions of empathy that I initially was attracted to was: well, empathy is when you think or feel the way the person that you're connecting with is thinking or feeling.

But I also realized that [was tapping into] what I think. And it's so easy to fool ourselves. How do I know if there's empathy?

And so a better definition of empathy, I believe, is not when I believe that I think and feel the way they think and feel, but whether they believe that the way I'm thinking and feeling is the way they're thinking and feeling. And so I began including at the end of research what I labeled to myself an empathy test. I would echo back what I had learned from the person I was speaking with, expressing, in my own words, not theirs, what I interpreted as their thoughts and feelings as they went through a decision-making process: whether to buy or not to buy, whether to buy this product versus that one, how much more this product was worth than that one, why this one was worth paying more for, [or] no, it wasn't, I would rather have the money in my pocket.

And this part is good enough. And that's really the part that I want, we could learn all those whys.

So I would echo those whys back to the person I was speaking with, as a test. And I would get the same kind of “Yeah, yeah, that's right,” that I would get when I would show a product or feature to someone – “Do you like my product?” But that's akin to asking, “Do you like my puppy? Do you like my baby?” You're not going to get a no.

So I began inserting an error in my echo. And so I would describe the reasons, the values, the motivations, the concerns, the feelings that I had heard them say, and I would insert a subtle but noticeable misinterpretation into my echoing of what I had heard, intentionally wrong.

And invariably, they would correct me. They didn't want me to leave with the wrong impression.

And I found that by appreciating the correction, and asking for more, I had demonstrated the vulnerability of being wrong, and the desire to be corrected and educated.

And I would learn more things. And I would encourage [them] more, “What else did I miss? What are those other subtle things? How exactly did you go about deciding what you know? How important was trust in your decision making process, compared to the desire to get a solution soon?”

And we can learn these subtle things, and learn what we should be amplifying in our product in order to express those values. And what we should be sacrificing on our product in order to amplify the things that are truly important.

And by expressing empathy, by showing caring about what that person is thinking and feeling and appreciating the correction, we can encourage more.

And so the process of interviewing is less about asking questions and receiving answers, than it is about listening attentively and appreciatively. And not nodding excitedly when they say they like it for the reasons we think they should. And kind of frowning when they say things that don't align [with] it. They react to that. They're empathetic people too.

And instead, by appreciating every single piece of information they share and seeking more, we learn. One of the best ways of rewording an interview question is not to ask "what" but "what are the ways" – to assume that there are plural answers, because there's always more than one thing.

It's not just one thing. And if we stop at the first thing, we're not learning all those other things. And so every single question should assume that there are multiple reasons, that some of them may be in conflict with each other, or in some cases this, and other cases that.

Surveys are terrible in this regard, because they force a choice of some things and prevent the “Well, yes, it's A, except when B is there, but only in this other situation.” [answers, so the] context is missing. And it keeps us from discovering.

We can only discover what we know to ask about. Whereas in an interview, or in an observational setting, we can see things that are surprising. And that's what we're seeking in discovery.

Ekaterina (Sam) Howard

If we're seeking. Sometimes we just want to validate our idea.

Alan Albert

It is so tempting, it rewards our ego. But the best thing that can come out of research is a surprise. That means we saved ourselves a surprise in the marketplace.

Ekaterina (Sam) Howard

I love that reframe, I feel like in many cases, research can only be accepted as a thing you need to do if it promises certainty. And it's not the outcome in most cases, in most cases, it's like, “Well, we thought this was true. But actually…”

Alan Albert

Yeah, I mean, there are two kinds of research. There's evaluative research: how many people do this versus that. And all that evaluative research is great. It's useful, but it's mostly useful during the optimization phase, after we've got the feature there and we're tuning and tweaking.

Discovery is different. We're not looking for evaluating, we're looking for insight, things that we did not think or know about prior to doing so.

And it's not a matter of how many, or what; it's a matter of why, and how, and seeking those surprises as a way of reducing our risk in introducing a product and then having those surprises happen.

It's risk reduction.

And when we have a high failure rate, reducing the risk of failure is a great thing to do.

And it is inexpensive, compared to the cost of failing.

It's easy to ask questions that get the answer that we're seeking. It's harder, but well worth it, to first identify what we want to learn, what we are trying to learn, and then focusing the interview just on that learning.

Rather than interview about the entire lifecycle of the customer, the entire customer journey, we can learn so much more, so much faster, by focusing one interview on one point in that customer journey: what led them to decide they needed to buy something, what led them to choose this one product over another one, what led them to decide they no longer needed it anymore, what led them to abandon the onboarding process.

We can learn all of those things.

If we interview a bunch of people about the entire process, we're gonna get gobs of data. Gobs of data are great, right?

The problem is what's labeled the flaw of averages. We lump all those learnings together, and we average them, and we get a mess. The average of a cat and an orange is not a thing. And if we lump what people care about when they're using the product with what they cared about when they were buying the product, we're actually removing insight from our process.

So [focusing on] one thing at a time will make it a lot easier to interview people, increase the signal to noise ratio, very few people need to be interviewed in order to gain insight. You hear the same thing three, four times about that same process:

“I got to this point. And I said, I'm not gonna go further.” Oh, people have a problem with this – what can we do about that? That moves into solution discovery.

But we need to first understand the values, why do people do what they do. And by focusing on what is the decision they made, and what led them to do it, we can gain so much insight.

But a big tip here, when we're looking to discover why, the worst word I know to put in that question to learn why is that word – “why.”

It is toxic, that word “why,” to learning why.

First of all, there's an assumption that there is one reason – we talked about asking questions in ways that encourage multiple answers.

When we ask “Why did you do that?” people will have to think of all the things they did, what was that why, and they may or may not know. They may not be able to recall quickly. They're gonna say something, but it might not actually reflect the reason why they did it.

There's multiple reasons.

And so there are plenty of other ways we can go about getting at that why, without jumping to the thing that we want to learn.

There is a distinction between what we want to learn, and how we want to learn it.

And I encourage researchers to first write down all the things they want to learn. Don't worry about the wording, just: what do you want to learn? What are the questions – they don't even have to be in question form. What do you want to learn?

And then for each of those things that you want to learn, first, make sure that you actually need to learn that and it's about one part of the journey. And then ask, if I want to learn that, what are the ways – plural – in which I might ask that, and which of those ways is most likely to get at the deeper insight that I'm seeking?

Too often, we succumb to what I call question drift. Oh, it's easier to ask it this way. So [we] ask it this way. But it's actually a different question. We've drifted away from what we're trying to learn. And so by starting with, what are we trying to learn and then how might we learn it, we can do a better job of wording the question in ways that will elicit deeper plural answers that take context into consideration.

You know, “What kind of food do you like?”

Well, where, when? In the morning, or in the afternoon? At home, or in a restaurant? When you're alone, or when you're with friends?

Context matters.

And so, including the context in the question, the context that the person you're speaking to actually went through, not “what would you,” “what did you?”

Ekaterina (Sam) Howard

Yeah, hypothetical questions. They do not make me happy, no.

Alan Albert

I would go to the gym tomorrow. I would have a salad for dinner.

And yet, we tend to be poor predictors of our own future behavior. So only talk to people who did the thing that we want people to do. If you want to discover why they did it, talk to those who recently did it.

<<BREAKOUT>>

Ekaterina (Sam) Howard

There are two schools of thought around who should be running interviews. Some folks argue that it’s easier to have consultants conduct interviews, because they will be perceived as independent third parties, making it easier for customers to be honest about their challenges and frustrations. Some folks argue that bringing interviews in-house is better in the long run, because then it’s possible to run ongoing calls and stay on top of customer research.

Either way, someone needs to run those calls. And my hypothesis at the time was that especially for male founders, because of the pressure to be the perfect visionary founder who doesn’t need research, it may be much harder to conduct interviews focused on exploration versus confirming assumptions.

I thought that this would be kind of like letting sales do discovery calls. They won’t be able to resist this temptation to turn it into a sales conversation, as we’ve discussed with Lucy Heskins earlier.

But that would be too simple, wouldn’t it?

The answer is actually “Being a good interviewer is a skill anyone can learn.”

<<INTERVIEW>>

Alan Albert

We can all get better. And I really enjoy working with interviewers, regardless of their gender, to take them from where they are, and help improve. To get better at those skills.

I myself had to go through an empathy training journey, to figure out how to do that empathy thing in a way that elicited open, honest, introspective, vulnerable answers.

I was not naturally good at it, I was not naturally good at talking to people.

And so it took a considerable effort to change the way I spoke, to change the way I listened, to change the way I moved and held my body when conducting an interview of someone else in a way that would be perceived as open and accepting, and appreciative of whatever answer they gave, equally appreciative of every single piece without being repetitive and boring.

It's like anything else. Empathy is a skill. It's like a muscle. And if we don't exercise it, it tends to atrophy. But we can build it up.

And we can continue to improve and add to the skills that we have at eliciting open, honest answers. Introspective answers, answers that the person we're speaking with may not have had in their mind prior to the conversation, helping them do some discovery, some self-discovery.

You know, we assume that the person we're talking to has the answers. That's why we're talking to them. They may not have them.

And so it's our job to help them lead themselves on that journey that they had been through when they did those behaviors, to examine the things that they did do that led them – not why they did it, the things, plural, that led them – towards the thing that they were attracted [to], and the things that may have pushed [them] back: like, maybe I'm not sure I want that, because that also means this thing.

We can learn those forces that propel us towards the new thing as well as the forces that lead us away from that new thing in a way that encourages the people we're talking to to do that self discovery. As we're speaking with [them].

We both have that shared interest in understanding “Why did I do that?” Yeah, so whether that's gendered or not, I think we all can improve.

And some people, I would say, are naturally good at it; it is their nature to be that way. I've tried to learn and to try on the behaviors that I see them doing. And I found it amusing at first, but also empowering: when I tried on the behaviors of people who I see and perceive as being particularly empathetic, I was shocked that people responded to me as if I was empathetic, as if I cared.

And the follow-on effect kind of sent shivers through me: I actually began feeling those feelings of empathy that I hadn't had before. I began caring more.

And by trying on those behaviors, it gets reflected towards us. And those things get shared with us. And we get to appreciate the feelings, the motivations, the values that people were caring about. And we can try them on as well.

How does it feel to be in that situation, deciding “Should I do this or should I do that?”

“When was that feeling most intense? What did we care about then?” These are all enjoyable experiences to have, and they provide us with insight. Going back to [the] more business side of things: insight into the customer journey, and what leads people to take that next step. That's empowering – as empowering as a dialogue between two people can be – and it brings us back to the practical matters of achieving our business objectives.

The funny thing is that if we treat an interview not as an interview of going through a list of questions, okay, next question, next question, but as a conversation between two people seeking a connection, it turns out that the interviewees really enjoy the conversation.

That conversation when we're doing discovery, that conversation is about them and what they care about.

And I've had people in an interview break down and cry towards the end of the interview, not because they were unhappy, but because they were so appreciative.

A woman said, “nobody ever asks me what I care about, I'm so happy,” and she was just crying. And I welled up, like nobody had asked her what she cared about?! How awful. But to be able to provide her with that experience, I was grateful for that.

And so we help each other. Everybody can help somebody else where everybody needs help. And so, if during the course of an interview, we help people understand things about themselves, that's great, too.

It's an exchange of good value in both directions.

Ekaterina (Sam) Howard

Thank you so much.

Alan Albert

Thank you.

<<EPISODE RECAP>>

Ekaterina (Sam) Howard

The easiest way to recap this conversation would probably be to say that all you need to do is try harder, and you’ll be able to get into the discovery mindset and magically start solving typical startup problems differently, without wasting time and effort, with the help of empathy and better customer insights.

There are so many tips in this interview that can contribute to your startup’s growth, from spending more time on defining the problem and what a feasible and desirable solution looks like to being very conscious of the way you come across when you run customer interviews.

And yet, change takes time and a lot of effort. So my hope for you is that you’ll be coming back to this episode and / or transcript in the future to keep implementing these tips.

<<BONUS CONTENT INFO>>

Ekaterina (Sam) Howard

Sign up for my newsletter, The Lightbulb Moment Club, to get new episode notifications and bonus content. Next week, it’s going to be a list of folks you can follow to keep getting better at interviewing your customers and making sense of that data.

<<NEXT EPISODE>>

Ekaterina (Sam) Howard

The next episode is all about messaging and copy: what you need to have in place so you can start working on your messaging, how B2B SaaS startups can actually use storytelling and what storytelling even means for B2B SaaS, and how developing a different brand voice can help you stand out in a crowded market.

<<OUTRO>>
