Yes! Sample size matters. And there’s no one size fits all. It’s not (only) how big it is, it's how you’re going to use it.
Yes - it’s a bit of primitive rhetoric, but unfortunately so is the seriousness with which the average business person treats uncertainty in data: I often experience clients asking me to segment survey results into far more subcategories than the number of respondents can validly bear.
That is why you need to think about your desired reporting and the importance of the business decisions you will make based on your research: sample size should be highly correlated with the impact your conclusions will have on your business.
Thus, when strategic decisions like brand positioning or customer segmentation need to be made, your sample size should be higher than when you are doing research for more tactical and operational decisions like a package test or a concept evaluation.
It is therefore a good idea to have a clear view of the important dimensions in a given analysis and the method applied, so that the sample size fits well. That being said, there is no magic number.
In this post I take you through some of the fundamentals for deciding the sample size for your quantitative market research surveys.
Let’s start by defining what a sample is:
a sample is a subset of a full population. But it isn’t necessarily a representative sample.
The composition of the subset depends on your research objective: are you researching the general population or your specific target group(s)? The right thing to do depends on the type of research you are conducting. I touched upon that theme in my latest blog post about screening with a meaning.
Regardless of your sample composition, there is one thing you will always have to accept some level of: uncertainty.
Because the truth is - paradoxically - that there is never one unequivocal, indisputable truth when it comes to market research. That is why the big question when determining your sample size is: how sure do you need to be?
The statistical uncertainty connected to sample size is (almost) independent of the population size; uncertainty falls with diminishing marginal returns as the sample size increases.
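To make this concrete, here is a minimal Python sketch of the standard margin-of-error formula for a proportion under simple random sampling; the function name and the 95% z-value of 1.96 are my own additions, not from the post:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion p estimated from
    n respondents, at the confidence level implied by z (1.96 ~ 95%).
    p=0.5 is the worst case; the finite population correction is
    ignored, which is fine when the sample is a small fraction of
    the population."""
    return z * math.sqrt(p * (1 - p) / n)

# Doubling the sample does not halve the error - returns diminish:
for n in (100, 300, 600, 1200, 2000):
    print(n, f"±{margin_of_error(n) * 100:.1f} points")
```

Note how going from 100 to 300 interviews cuts the error from roughly ±9.8 to ±5.7 percentage points, while going from 1,200 to 2,000 only trims it from about ±2.8 to ±2.2.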
In practice, uncertainty means: how much can I trust the sample mean, and what is the error on the parameter estimate? You use this uncertainty to compare against other averages in the sample, or against a historic sample, to see if something is significantly different.
General distribution theory gives us parameter estimates and statistical tests, so we can interpret whether an outcome is just random error due to limited sample size, or significant due to a real change in a given dimension (opinion, behaviour, perception, belief or attitude - call it what you will).
When something is significantly different, it means you have ruled out the error margins as the explanation for the change.
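As an illustration of such a test - my own sketch, not a method prescribed in the post - a simple two-sample z-test for the difference between two survey proportions looks like this:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two sample proportions,
    using the pooled-proportion standard error. |z| > 1.96 means the
    difference is significant at the 95% level."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# The same 7-point gap (40% vs 33%) is inconclusive at n=300 per group
# but clearly significant at n=1,200 per group:
print(two_proportion_z(0.40, 300, 0.33, 300))    # ~1.78, below 1.96
print(two_proportion_z(0.40, 1200, 0.33, 1200))  # ~3.56, above 1.96
```

The point of the example: the same observed difference can be noise or a real finding depending purely on how many respondents sit behind each number.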
Your interpretation of data must account for this inherent property of aggregated, limited subsets of a population. Getting it wrong can have big consequences, so your storytelling needs to stay true to the data and its uncertainty, even when that conflicts with your expected results, current beliefs, confirmation biases, or willingness to risk investments based on the outcomes.
The question is: how many respondents do you need to be confident in your survey results?
Well, it depends on the level of uncertainty you are willing to accept.
As I said in the beginning, the sample size for your market research should be highly correlated with a) the impact the conclusions you are about to make have on your business, and b) the lowest level of analysis: into how many subgroups you will divide the survey respondents in your reporting.
Thus, my advice is to sketch out your research plan, from top-line penetration figures down to in-depth target-group understanding and specific areas of importance. You can’t have it all without losing focus and quality. And money. Whatever you choose, it is going to cost you, either in money or in depth of analysis.
While there is no magic number, there are definitely things to consider that can help you narrow the range. That said, fewer than 300 interviews is practically never a good idea if you want any validity in your analysis.
My general rule of thumb is to double the sample size each time you break the survey results down beyond a dual split: four subgroups in your reporting should have at least 600 respondents in your sample, eight splits at least 1,200 respondents, and so forth. In other words, aim for at least 150 respondents per subgroup.
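The rule of thumb above can be sketched as a tiny helper; the function name and defaults are mine, mirroring the 300-interview floor and the 150-per-subgroup guideline:

```python
def recommended_sample(subgroups, per_group=150, floor=300):
    """Rule-of-thumb total sample size: at least `per_group` respondents
    for every reporting subgroup, and never below the overall floor."""
    return max(floor, subgroups * per_group)

print(recommended_sample(1))   # 300 - no splits, floor applies
print(recommended_sample(4))   # 600 - four subgroups
print(recommended_sample(8))   # 1200 - eight subgroups
```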
Furthermore, if you know from the outset that you want to analyze results as a split between two or more groups, you need to ensure a sufficient size for each group. This is done through your sampling strategy and screening questions.
In reality, most analyses I see are between 500 and 1,000 interviews, and rarely above 2,000 interviews.
So, when answering the question of how many respondents I recommend for a piece of research to be valid, I will always ask in return: how sure do you need to be? There is no magic number for your sample, but going too low will turn your certain facts into uncertain indications.
Thus, it is always a trade-off between budget, importance and the validity you need, which in turn depends on the type of research you are conducting.
My best advice is to understand the general uncertainty, and at least the two key parameter estimates, mean and variance, so that you don’t get hijacked by confirmation bias or end up doing quantitative analysis in vain.
Then choose your sample size according to the strategic importance and depth of analysis of your research. A range between 300 and 2,000 interviews should cover most types of analysis.