Recruiting and Screening Candidates for User Research Projects


Recruiting participants for research studies is a difficult task: you have to attract interested participants, schedule times for the study, send reminders, and then hope that participants do, in fact, show up for their scheduled sessions. To make matters worse, some participants who do show up are not good candidates for the study because they simply lack the relevant life experiences to contribute meaningful feedback or insights, even with their best efforts. Suboptimal study participants negatively affect the quality of your research and of your design decisions.

Screening surveys (also known as “screeners”) are questionnaires that gather information about candidate participants’ experiences to:

  1. quickly identify and prioritize optimal candidates who are representative of your target audience
  2. exclude any candidates who may not be a “good fit” for your research study

In this article we discuss the importance of screening in the user-research recruitment process and how to incorporate it into your recruitment strategy.

First, for any user study, you want to make sure that you recruit people who are representative of your audience. To do so, be aware that the recruiting process may be biased toward certain types of participants.

For example, whenever you are using a remote-testing platform that includes its own participant panel, you may run into “professional testers” — people who make a significant portion of their income by participating in different kinds of user research. While these participants are not inherently “bad” for a study, their motivations may lead to behavior that skews research results. For example, some may answer questions or perform tasks very quickly (or not at all) during unmoderated tests, rather than making an honest effort to do them in a realistic way. Others may know what researchers are looking for in user tests and deliberately exaggerate feedback to match the perceived study needs.

While there is no foolproof method for weeding out professional testers, you can mitigate the problem by further vetting screening-survey answers to determine whether they are reasonable and honest (e.g., participants did not type “abc” instead of a meaningful sentence). To be clear, the intent behind screening surveys is to alleviate some of the manual work of vetting candidates, but some effort is still required (whether by the researcher or a professional recruiter) to ensure that quality candidates get selected.

Similarly, if you rely upon your personal network to recruit participants (a “convenience” sample), these people will already have somewhat of a relationship with you and may feel reluctant to give honest negative feedback. Testing with coworkers can also bias results because they may be familiar with the project, the organization, and even with different types of user research and their goals.

In general, you should also screen out UX professionals (or those who are “UX-adjacent” with an interest in interface design) since they will be too sensitive to UX issues and more likely to offer an expert review than realistic user feedback.

Even if your customers are “everybody” (that is, a general audience covering a mix of genders and ages), you need to make sure that your study has external validity — namely, that your participants match the goals and interests of your audience. For example, someone who is not at all into sports or hiking may not be motivated to shop on an outdoor-equipment website.

How to Screen Participants

1.  Define eligibility criteria.

First, you and your team should identify participant criteria for your study. Think about both the demographics of your target audience and its goals as they use your products. (For a thorough breakdown of this process, check out our free report How to Recruit Participants for Usability Studies.) These criteria will determine your recruitment strategy and your screener.

If you use automated-recruiting platforms, be careful not to overly restrict your pool with extraneous elimination criteria. For example, say you were concerned about professional testers and included an exclusionary question like “When did you last participate in a research study?” You may exclude more participants than necessary, such as second-time participants who happened to take part in a completely different type of study that same month.

In a similar vein, in an attempt to recruit marketing professionals, you might choose to accept only individuals who select Advertising and Marketing as their industry. However, marketing professionals are not exclusive to advertising and marketing agencies and, more often than not, they perform marketing functions within nonmarketing organizations (like grocery or apparel stores). Thus, it would be more productive to accept people from multiple industries who have the term “marketing” within job titles or descriptions.

2.  Construct your screener.

When writing your questions, consider using open-ended or multiple-choice questions to avoid giving away the study’s intent.

For example, a yes/no question like Do you play video games? hints that the study might be related to video games and that the desirable answer is yes. However, if you phrased the question as Which of these activities have you done in the last 4 weeks? with a list of options like hiking, reading, shopping, and video games, then the intent of the study is less obvious. Having distractor answers keeps respondents honest and prevents them from gaming the questionnaire.
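As a hypothetical illustration of the distractor approach, a multiple-choice screener answer could be scored with a simple qualification check (the activity list and qualifying answer here are invented for this sketch, not from any particular survey tool):

```python
# Hypothetical screener question: the option list mixes the real
# criterion ("video games") with distractors so the study's intent
# is not obvious to respondents.
ACTIVITIES = ["hiking", "reading", "shopping", "video games", "none of these"]
QUALIFYING_ANSWER = "video games"  # the hidden eligibility criterion

def qualifies(selected_activities):
    """Return True if the respondent selected the qualifying activity.

    Respondents can pick any mix of options; only the hidden criterion
    matters, and distractor picks are simply ignored.
    """
    return QUALIFYING_ANSWER in selected_activities

# A respondent who picked distractors plus the real criterion qualifies;
# one who picked only distractors does not.
print(qualifies(["hiking", "video games"]))  # True
print(qualifies(["reading", "shopping"]))    # False
```

Because every option looks equally plausible, a respondent cannot tell which pick unlocks the study, which is what keeps the answers honest.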

Place your most important exclusion criteria near the beginning of the survey so that you can quickly eliminate obvious misfits without wasting their time.

Survey logic can expedite the screening process, so consider picking a survey tool with strong branching capabilities.
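The ordering-and-branching advice above can be sketched as a simple early-exit screener; the question names and pass conditions below are invented for illustration:

```python
# Sketch of why exclusion criteria belong early: the screener asks
# checks in order and stops at the first disqualifying answer, so
# front-loading the strictest criteria wastes the least respondent time.
def run_screener(answers, ordered_checks):
    """Return (qualified, questions_asked) for one respondent."""
    for asked, (question, passes) in enumerate(ordered_checks, start=1):
        if not passes(answers.get(question)):
            return False, asked  # disqualified; remaining questions skipped
    return True, len(ordered_checks)

# Hypothetical criteria: exclude UX professionals first, then check
# audience fit (an outdoor-gear shopper, per the earlier example).
checks = [
    ("works_in_ux", lambda v: v is False),
    ("shops_outdoor_gear", lambda v: v is True),
]
print(run_screener({"works_in_ux": True}, checks))
# (False, 1) -- disqualified on the first question
```

A survey tool with branching applies the same principle: respondents who fail an early gate are routed straight to a polite exit page instead of answering the full questionnaire.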

3.  Pick your strategy for recruiting participants.

You could find participants for your study in a number of different ways. Each method has its strengths and weaknesses, and, depending on your study’s research questions, you may want to recruit from multiple sources.

  • Professional recruiting agency

    There are many professional recruiters whose sole purpose is to help UX or market researchers find qualified research candidates; most will even take over some of the work of interacting with prospective and final participants (e.g., scheduling, communication, payment). These agencies have a relatively wide reach in their ability to recruit “general audience” participants (e.g., a mix of genders, backgrounds, financial status, and ages).

    They are especially useful for addressing highly specialized recruiting needs or finding specialized user groups (e.g., participants with a certain disability, or with a very specific type of context or background). They often vet participants in advance, a practice that results in a low no-show rate. However, they are often more costly than automated tools or leveraging existing pools of users.

     

  • Automated recruiting platform

    There are many online platforms for user research on the market, and some of them offer automated screening capabilities or recruiting services. If you target a “general audience,” then this method could be beneficial, due to its relatively low cost, ability to outsource some recruiting work, and its relatively quick turnaround (due to automation).

    However, with an automated recruiting platform, there is a greater risk of recruiting “professional testers” and sometimes screening-configuration options (like survey branching and logic) are limited within the tool.

     

  • Internal panel of existing users

    Many companies opt to recruit from their existing userbase, establishing a pool of volunteers willing to either try out new features (e.g., A/B split testing or beta testing) or participate in research opportunities in general. This approach can be great for recruiting experienced users of a specific product or for getting insights about employee-facing products.

    This method has lower immediate recruiting costs (both monetary and time-related): candidates are already semiqualified and there are no external recruiting fees. You would, however, most often need to provide participants with some compensation — monetary or of some other type, such as products — for their time. There is also the cost of creating and maintaining a user panel, as well as the work of scheduling and coordinating participants. Some larger companies have a dedicated in-house research recruiter — staff hired to create and maintain the user panel and respond to internal research needs.

    The reach of an internal panel is limited to participants who are already familiar with your brand and offerings. Most likely, with such a recruiting pool you will not capture new-user perspectives. There might also be some sampling and confirmation bias due to participants’ existing brand loyalty — since they already have a relationship with the brand, they may already like it and provide positive feedback.

     

  • Online forums and groups (e.g., discussion boards, professional networking groups, or other social media groups)

    When the existing userbase and automated panels fail to provide the degree of specialization required, turning to online groups and forums can identify participants who have certain experiences, interests, or backgrounds. These groups can allow you to address specialized recruiting needs at a lower cost than with recruiting agencies or platforms.

    While these can help provide a group of highly motivated participants, the reach will also be narrow and limited to the members of those groups, which are not always representative of all individuals in the targeted audience. Particularly when the groups themselves have certain predominant viewpoints or vocal community members, there is a higher likelihood of sampling bias or groupthink. There is also going to be more time and effort required to communicate with participants and establish eligibility. If you decide to contact members of an internet group or social-media platform, make sure to request permission in advance from the group moderator.

     

  • Intercept studies (or “hallway recruiting”)

    If you have ever heard or read the phrase “Would you be willing to take 5 minutes to participate in a quick survey?” you know what an intercept study is. These types of studies can be done virtually (via a popup or modal dialog), on the phone (via interactive voice response (IVR) systems), or in person (in a shopping mall, office hallway, or coffee shop). They are ideal for recruiting visitors or existing customers and for finding participants with a specific goal or task in mind (like using a specific feature). Recruiting can thus be automated for unmoderated studies or surveys.

    Unfortunately, these studies can also result in some wasted researcher time (for moderated or in-person sessions, the researcher has to be available and ready to go whether a participant is found or not). Depending on the specificity of your needs, they might have greater turnaround time — for example, it may be hard to recruit 100 participants for a quantitative study of people who subscribe to a newsletter. Like internal panels, such studies (especially if online) can be subject to sampling and confirmation bias because they involve customers who have already decided to interact with the brand.

| Recruiting Method | Reach | Cost | Effort | Time | Bias Risk |
| --- | --- | --- | --- | --- | --- |
| Professional recruiter | Wide (general users, specialized users) | High | Low | Med | Low |
| Automated recruiting platforms | Wide (general users) | Med | Low-Med | Low | Low |
| Internal user panels | Narrow (existing users, power users, employees) | Low | Med | Low | Med-High |
| Online forums and groups | Narrow (specialized users) | Low | High | Med | Med-High |
| Intercept studies | Narrow (visitors new/existing, task-oriented users) | Low | High | Med | Med |


 

4.  Adapt your recruitment strategy and screening survey to attract the right participant.

Time-poor or high-earning professionals (like night-shift or swing-shift workers, executives, doctors, or lawyers) may need high incentives to justify time spent away from work or their personal lives. They also might be less likely to spend a lot of time filling out a lengthy screener survey.

 

5.  If you are recruiting two or more types of user segments for the study…

You can attempt to use one screening survey for all these sets of criteria, but there are tradeoffs:

  • On the plus side, participants don’t have to fill out the survey more than once, which increases the likelihood you’ll get a sufficient sample size.
  • On the minus side, you may end up with a pretty complicated, branching screener or may need to manually filter candidates even after they have filled it out.

For example, let’s say you wanted to recruit two user types: power users who do not work in the technology sector and novice users who work in tech. You could have survey logic that routes tech professionals through the novice qualification criteria and routes nontech professionals through the power-user qualification criteria. Or, you could ask everybody whether they work in tech and take them through both the power-user and novice questions; then look at the answers and manually decide whether they fit your combination of requirements.
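The routing described above can be sketched as a small classification function; the segment criteria (years of product use as the novice/power-user cutoff) are invented for this illustration:

```python
# Sketch of the branching logic for the two-segment recruit described
# above: tech workers are screened against the novice criteria, and
# everyone else against the power-user criteria. Thresholds are
# hypothetical, for illustration only.
def classify(works_in_tech, years_using_product):
    """Assign a candidate to a segment, or return None if they don't fit.

    Assumed (hypothetical) criteria:
      - novice:     works in tech AND under 1 year of product use
      - power user: not in tech AND 3+ years of product use
    """
    if works_in_tech:
        return "novice" if years_using_product < 1 else None
    return "power user" if years_using_product >= 3 else None

print(classify(True, 0.5))  # novice
print(classify(False, 5))   # power user
print(classify(True, 4))    # None -- tech power users are out of scope
```

The manual-filtering alternative corresponds to dropping the branch: ask everyone both question sets, collect all the answers, and apply the same decision rule by hand afterward.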

6.  Review the survey responses of both qualified and disqualified candidates before finalizing your approved list of participants.

(If you work with a recruitment agency, ask them to provide all the filled screeners for you or at least those of the near matches.)

By reviewing the submitted survey responses, you can identify close-fit applicants whom you may want to consider as backup candidates if they meet most, but not all, of the criteria for your target audience.

For example, if your target audience was parents with multiple children, a primary-caregiver aunt or uncle may have been disqualified, yet still be acceptable for the study. Alternatively, some participants may have been screened out because they could pick only one of several possible answers (e.g., they may have answered Android to the question Which type of phone do you own? even though they own both an Android and an iPhone).

7.  If you’re unsure whether eligible participants are truly a good fit for the study (because, for instance, your recruit is highly specific)…

It might be worth breaking the study into two parts: a 15-minute screening interview and a 30- or 60-minute research session. This screening interview can serve two purposes:

  1. evaluating candidates to clarify their screening-survey responses and validate whether they are a good fit for the study
  2. preparing the selected participants for the main study session by reviewing key logistic details (e.g., consent forms, communication methods, expectations for participating, device requirements, applications that need to be downloaded, and other setup work like account creation).

8.  Finally, notify your fully qualified and approved participants, and begin scheduling sessions for your study.

Avoid scheduling anything with semiqualified candidates unless you know for sure you will invite them to the study. Otherwise, once you promise a spot, you are technically obliged to provide compensation if you cancel the appointment.

No matter who your users are, you must screen research participants to ensure you are using your research time and budget wisely. After all, your design decisions will be only as good as your data. By recruiting representative study participants, your team can reduce bias and build experiences tailored to your specific users’ needs.


