

The rise of increasingly powerful chatbots offers a new way to collect information through conversational surveys, where a chatbot asks open-ended questions, interprets a user's free-text responses, and probes answers whenever needed. To investigate the effectiveness and limitations of such a chatbot in conducting surveys, we conducted a field study involving about 600 participants. In this study with mostly open-ended questions, half of the participants took a typical online survey on Qualtrics and the other half interacted with an AI-powered chatbot to complete a conversational survey. Our detailed analysis of over 5,200 free-text responses revealed that the chatbot drove a significantly higher level of participant engagement and elicited significantly better-quality responses as measured by Gricean Maxims in terms of their informativeness, relevance, specificity, and clarity. Based on our results, we discuss design implications for creating AI-powered chatbots to conduct effective surveys and beyond.

In many domains, including human–computer interaction (HCI) research, conducting surveys is a key method to collect data. With the widespread use of the internet, self-administered online surveys have replaced old-fashioned paper-and-pencil surveys and have become one of the most widely used methods to collect information from a target audience. Compared to paper-and-pencil surveys, online surveys offer several distinct advantages. First, an online survey is available 24x7 for a target audience to access and complete at their own pace. Second, it can reach a wide audience regardless of geographic location. Third, online survey tools automatically tally survey results, which minimizes the effort and errors involved in processing the results.

Due to the extensive use of online surveys, survey fatigue is now a challenge faced by anyone who wishes to collect data. Research indicates two typical types of survey fatigue. The first arises before a survey even starts: because people are inundated with survey requests, they are often unwilling to take any surveys at all. The second arises during survey taking: evidence shows that as a survey grows in length, participants spend less time on each question and the completion rate drops significantly. For example, one of the biggest survey platforms, SurveyMonkey, reports that on average participants spend 5 minutes to complete a 10-question survey but only 10 minutes to finish a 30-question survey, roughly 30 seconds per question versus 20. The problem is exacerbated with open-ended questions because of the extra time and effort required to formulate and type responses to such questions.

Open-ended questions are an important method to collect valuable data and are widely used in self-administered online surveys. In particular, open-ended questions allow respondents to phrase their answers freely when response options cannot be pre-defined or when pre-defined options may introduce biases. Moreover, open-ended questions help collect deeper insights, such as the background and rationale behind an answer. However, open-ended questions often impose a greater cognitive burden and induce more survey-taking fatigue, and respondents are more likely to skip such questions or provide low-quality or even irrelevant answers. Consequently, survey-taking fatigue adversely affects the quality and reliability of the data collected, especially when open-ended questions are involved. To combat survey-taking fatigue, and especially to motivate and guide participants to provide quality answers to open-ended questions, several approaches have been proposed. One set of proposals is to inject interactive features into an otherwise static online survey, such as providing response feedback and probing responses, to improve response quality and encourage participant engagement. However, no existing survey platforms support such interactive features, nor do they automatically motivate and guide participants to provide quality answers to open-ended questions during a survey.

The lack of support for such interactive features on existing platforms may be due to two main reasons. First, it is difficult to automatically interpret participants' natural language responses to an open-ended question due to the diversity and complexity of such responses.

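To make the ask–interpret–probe flow and the interactive probing features described above more concrete, the sketch below shows one minimal way such a loop could be wired up. It is only an illustrative assumption, not the chatbot evaluated in this paper: the crude length-based quality heuristic, the probe wording, and all identifiers (MIN_WORDS, MAX_PROBES, FOLLOW_UP_PROBES, looks_informative, run_conversational_survey) are hypothetical, and a real system would need NLP to judge informativeness, relevance, specificity, and clarity.

    # Illustrative sketch only: a minimal ask-interpret-probe loop for a
    # conversational survey. The heuristic and all names are hypothetical,
    # not the system studied in the paper.

    MIN_WORDS = 5   # assumed threshold for a minimally informative answer
    MAX_PROBES = 2  # assumed cap on follow-up probes per question

    FOLLOW_UP_PROBES = [
        "Could you tell me a bit more about that?",
        "What makes you say that? A concrete example would help.",
    ]

    def looks_informative(answer: str) -> bool:
        """Crude stand-in for interpreting a free-text response:
        checks only length and a few throwaway phrases."""
        text = answer.strip().lower()
        if len(text.split()) < MIN_WORDS:
            return False
        return text not in {"idk", "i don't know", "n/a", "nothing"}

    def run_conversational_survey(questions, ask):
        """Ask each question, probing up to MAX_PROBES times on weak answers.

        `ask` is any callable that sends a prompt to the participant and
        returns their free-text reply (e.g., a chat UI adapter)."""
        responses = {}
        for question in questions:
            answer = ask(question)
            probes_used = 0
            while not looks_informative(answer) and probes_used < MAX_PROBES:
                answer = ask(FOLLOW_UP_PROBES[probes_used])
                probes_used += 1
            responses[question] = answer
        return responses

    if __name__ == "__main__":
        # Console-based usage example; a deployed chatbot would plug in a
        # messaging front end instead of input().
        demo_questions = ["What do you like most about online surveys?"]
        print(run_conversational_survey(demo_questions,
                                        ask=lambda q: input(q + "\n> ")))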