Early-stage startups: are you making these survey blunders?

How to get honest and thoughtful answers to your survey questions and prevent biases from derailing results — with the power of science!

After seeing yet another “Please help us with our research” survey start with a “Please enter your email” field, I realized that a lot of startup founders are losing out on opportunities to learn about their target audience through surveys, because their survey design acts as a repellent to anyone but the most motivated respondents.

If you’ve ever:

  • Made all of the survey fields (including demographics!) required...

  • Started the survey by asking for an email without spelling out how you’ll use that email address...

  • Designed a survey without any jump links, even though some of the questions would not be applicable to all of the respondents...

  • Added “What’s your age?” as a drop-down list running from 18 to 99, instead of offering age ranges...

  • Used up 100% of the welcome screen talking about your amazing product...

...head over to part 1.

To make matters worse, it is very easy to be led astray, either by introducing your own biases or by discounting the biases that are likely to affect your respondents’ answers (“How often do you read to your kids?” — prime suspect in the age of COVID overwhelm).

If you’ve ever:

  • Focused all your questions on “What” while ignoring the “Why”...

  • Set up single-choice or multiple-choice questions without offering “None” or “Other” options...

  • Brainstormed all the options internally, without so much as a single interview or a review mining session…

  • Expected honest and candid answers on sensitive topics (from relationships to parenting)...

...read part 2.

This post is all about a) getting more survey responses and b) making sure that the data you get back is *meaningful*, so that your startup is set up for growth — with the power of science.


In this study, a deep dive into online labor markets shows that self-identification questions are not enough.

Apparently, a significant number of respondents were trying to game the system by answering self-identification questions in a way that would get them into the survey — and, when asked the same questions one week later, answered them differently.

*why, why, why?!*

Since questions around identity are *not* the way to go, here are some alternative approaches:

  • Survey with a verified panel (paid, but at least you know you can trust the results)

  • Survey with invited users (if relying on personal networks, responses are likely to be biased — see The Mom Test)

  • Paid survey with questions that are *not* identity-based to avoid introducing that wrong incentive

  • Unpaid survey relying on the welcome page to let potential participants self-segment based on their interests (the challenge here is getting enough responses)
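And if you do end up screening respondents with your own questions, one cheap safeguard against the gaming described above is to re-ask a key screener question near the end of the survey and flag respondents whose answers don’t match. A minimal sketch in Python with pandas — the column names (“screener_role”, “role_recheck”) are hypothetical, so adapt them to your own survey export:

```python
# A rough sketch, not a full pipeline: re-ask one key screener question
# later in the survey, then flag respondents whose two answers disagree.
import pandas as pd

responses = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "screener_role": ["founder", "marketer", "founder", "student"],
    "role_recheck":  ["founder", "founder",  "founder", "student"],
})

# Keep respondents who gave the same answer both times; flag the rest.
consistent = responses[responses["screener_role"] == responses["role_recheck"]]
flagged = responses[responses["screener_role"] != responses["role_recheck"]]

print(f"Kept {len(consistent)} of {len(responses)} responses; "
      f"flagged {len(flagged)} inconsistent screener answers for review.")
```

Flagged responses don’t have to be thrown away automatically — reviewing them first helps you avoid discarding honest respondents who simply mis-clicked.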


There are multiple survey tools out there that speed up the research process by offering access to verified panels — or by letting you assemble your own panels with screener questions.

Either way, you want to be able to attract as many respondents as your budget allows in the shortest amount of time.

To do that, you need to be able to answer this question: “What do verified panel participants want?”

This study identified 10 factors that impact whether or not panelists take on a specific survey.

While payment is the primary factor, there’s slightly more to it, depending on which group the participants belong to. According to the study, panel participants fall into one of these 3 groups:

  • Mercenaries

  • Decliners

  • Regular respondents

Based on the study results, these groups have slightly different motivations. The best bet to maximize panel participation is to take into account all of the following factors:

  • Speed of completion

  • Topic interest

  • Ease of completion

  • Software functionality

  • Topic knowledge

  • Benefit to others

  • Impact

  • Relationship with brand / organization

  • Respondent’s opinion valued

Some of these are fairly obvious — like the preference for speed, ease of completion, and functional software (“I want to fill out a clunky survey” — said no one ever), but other factors are frequently overlooked: for example, respondents’ desire for meaningful work (making an impact), or their desire to feel like a valued expert (opinion valued and topic knowledge).

You can make your survey more desirable for those groups by including the study’s goals and expected impact in the survey description — and it shouldn’t take you longer than 30 minutes.


Filling out a survey means spending some extra time and effort on sharing information about your lifestyle, habits, attitudes, or challenges with some random online brand.

What could possibly make it worthwhile for a respondent to do that?

Incentives to participate in a survey (a discount, a raffle, a chance to win a gift card, or anything else that makes sense for your startup) are all important — and, if you can afford to offer them (and build in ways to screen out prospective participants who don’t fit the desired profile), they will help you increase the response rate.

But there are other ways to get more responses — and they work.

This study on mail-survey response rates approaches survey completion as a persuasion challenge, not an incentive challenge.

Adapting the hierarchy of effects approach, they suggest that instead of the standard AIDA (Attention - Interest - Desire - Action) process, AICR (Attention - Intention - Completion - Return) is a better fit.

This means that respondents decide to fill out the survey because doing so aligns with their preexisting attitudes — and that alignment reinforces their intent to stick with the survey through to the end.


Interestingly, neither how busy the respondents were nor how long the survey was impacted the survey completion rate.

In terms of online surveys, this means that taking the time to present the survey and introduce the incentive is more likely to result in more completions than trying to keep the survey short or removing open-ended questions.

Once you overcome the challenge of getting enough respondents to have a reasonable sample size, the next challenge is making sure you get usable responses.

Spoiler alert: everybody lies.


If you are reaching out to random strangers online to determine whether or not there’s enough interest in a startup idea — or if your goal is to better understand the habits and attitudes of your target personas — nonresponse bias is less of a problem: if someone is not part of your target audience, their feedback is not likely to be of interest.

But what if you have drastically different levels of engagement across the sample you’re trying to survey (think users, group members, list subscribers)?

What insights are you missing out on when you don’t hear from people who don’t want to respond to a survey?

How can you prevent customers from churning, if they won’t tell you what’s wrong?

According to this study, variable incentives may be the right answer: after identifying non-responding segments, follow-up outreach offers a monetary incentive to the respondents who were underrepresented in the sample.

In this case, the goal was not to gather different insights from the non-engaged group; rather, it was to have everyone (or at least enough participants) provide responses to the same questionnaire.
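If you want to try the variable-incentive route, the first step is spotting which segments are underrepresented in your responses. A minimal sketch — the segment names, numbers, and 80% threshold are all illustrative assumptions, not values from the study:

```python
# A toy illustration: compare each segment's share of survey responses to
# its share of the overall user base, and flag segments that respond at
# less than the threshold rate as candidates for a higher follow-up incentive.
population_share = {"power_users": 0.20, "casual_users": 0.55, "dormant_users": 0.25}
response_share = {"power_users": 0.45, "casual_users": 0.45, "dormant_users": 0.10}

THRESHOLD = 0.8  # flag segments responding at under 80% of their population share

for segment, expected in population_share.items():
    ratio = response_share[segment] / expected
    if ratio < THRESHOLD:
        print(f"{segment}: responding at {ratio:.0%} of expected rate, "
              f"consider a higher incentive in the follow-up wave")
```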

This is not always going to be the best approach for startups, where it can make more sense to dig deeper into specific challenges faced by each segment or use case, instead of focusing on response rates.

However, if you’re gathering baseline data, not having all member segments represented makes the results, from DEI assessments to NPS scores, essentially vanity metrics. To find out what all types of users think, try higher incentives for underrepresented groups.
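To see what that skew does to a metric like NPS (the share of promoters, scoring 9 or 10, minus the share of detractors, scoring 0 to 6), here is a toy comparison of the raw score against a segment-weighted one. All numbers are invented for illustration:

```python
def nps(scores):
    """NPS = % promoters (scores of 9-10) minus % detractors (scores of 0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# Engaged users over-respond; dormant users barely respond at all.
segment_scores = {
    "power_users":   [10, 9, 10, 9, 8, 10, 9, 10],  # many responses, happy
    "dormant_users": [3, 5],                         # few responses, unhappy
}
population_share = {"power_users": 0.4, "dormant_users": 0.6}

all_scores = [s for scores in segment_scores.values() for s in scores]
raw = nps(all_scores)
weighted = sum(population_share[seg] * nps(scores)
               for seg, scores in segment_scores.items())

print(f"Raw NPS (skewed by who responded): {raw:.0f}")   # 50
print(f"Segment-weighted NPS: {weighted:.0f}")           # -25
```

In this made-up example, the happy power users dominate the raw responses, so the raw NPS looks healthy even though the mostly silent dormant majority is unhappy.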


Let’s start with this (mind-boggling) quote from a study on measuring job satisfaction by asking respondents about it in an interview:

“The common empirical finding that women care less about wages and prefer to work fewer hours than men appears largely an artifact of survey design rather than a true behavioral difference.”

What influenced responses related to job satisfaction in that particular study:

  • Self-image

  • Social acceptability of responses (see below)

  • Characteristics of the interviewer

  • Presence of family members during the interview

  • Cultural differences

These underlying factors may affect your interviews and surveys as well, especially if you’re researching a niche that has strong societal expectations, such as parenting.

And, unfortunately, answers are likely to be biased by what respondents consider to be socially acceptable, even when they’re filling out a survey rather than talking to an interviewer: impression management — changing answers to look better to others — kicks in whenever respondents are interacting with others or assume that their answers will be tracked.

Voting, exercise, seat belt use, interest in buying organic foods, and even having library cards have all been overreported over the years (as mentioned in this study). And, apparently, when participants know that their data will be shared publicly, they will stick to the more socially acceptable responses — even if you promise that they will not be identified.


Fortunately, scientists also have some suggestions on how to reduce impression management bias in surveys:

  • Maintaining anonymity

  • Adding info on confidentiality

  • Statements explicitly encouraging honesty

  • Disguising a survey’s purpose


Research is never completely done, but that doesn’t mean a survey can’t help you reach your growth goals faster.

While there’s a strong bias towards making each survey as easy to fill out as possible, there’s always the danger of over-simplifying it to the point that the results become nearly useless.

This is why I’m very much in favor of pairing single-choice or multiple-choice questions, as well as evaluation grids, with open-ended questions.

For example (see the piping sketch right after this list):

  • “You gave us an NPS score of… Can you share the reasons why?”

  • “You rated this event as… What were the reasons you gave it that score?”

  • “You rated {{value prop}} as the least important factor in choosing {{product category}}. Can you share some of the reasons you gave it that rating?”
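Those {{value prop}} placeholders are what survey tools usually call “piping”: inserting a respondent’s earlier answer into a later question. A minimal generic sketch — the question text and answers are made up:

```python
# A toy illustration of piping: reuse an earlier answer inside a
# follow-up question. Most survey tools do this natively with
# {{placeholder}} syntax; this generic version uses str.format.
FOLLOW_UP = (
    'You rated "{value_prop}" as the least important factor in choosing '
    "{product_category}. Can you share some of the reasons you gave it "
    "that rating?"
)

earlier_answers = {"value_prop": "priority support", "product_category": "a CRM"}
print(FOLLOW_UP.format(**earlier_answers))
```

In a real survey tool you’d configure this in the builder rather than in code; the point is simply that the open-ended follow-up adapts to each respondent’s earlier rating.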

“I recently went through the exercise of gathering survey design best practices and modifying our existing research methods accordingly.

I think surveys can be helpful to understand current experiences and reveal challenges/opportunities for new solutions.

One of the best outcomes I've seen with surveys is when you provide multiple choice or binary questions and allow for users to elaborate with open text fields, and see users actually use that space to provide detail. The ability to pair quantitative data with qualitative responses gives the data so much meaning and consequence, and can demonstrate that you've identified an unmet need.”

Laurel Marcus, Growth at adyn

In addition, not including “Other” or “None of the above” options in your multiple-choice questions is likely either to frustrate respondents (“This doesn’t apply to my life”) or to mean that you’re missing out on some real pain points or challenges.

Watch “Data-Driven Copywriting for Brand-Spanking New Products” to find out more and see how to combine surveys and review mining to get your launch copy in order (review mining can do *a lot of things for you*).

Not convinced? Here’s a Copyhackers tutorial on writing a long-form sales page based on open-ended survey responses (who says you can’t do that for your homepage or feature pages?)


A couple of bonus ideas:

  • Add memes to break up the survey and add a little fun to the mix (read more here).

  • Get more in-depth responses around brand perception, taking it further than “Describe in 3 words…”, with projective techniques (If X were a car, what kind of car would it be?) (read more here).

I help B2B SaaS startup founders and marketers get more traction with research-driven conversion copy — without slowing down their growth initiatives.

Hire me for:

  • Website audit to find & fix conversion blockers

  • Day rates to optimize your landing pages, web copy, or email sequences for more clicks and signups
