DOI: https://doi.org/10.47989/ir284561
Introduction. New-generation information technologies are changing the way people seek information. Conversational agents are increasingly deployed in public settings to support information seeking and decision making and to provide a variety of services to users, such as healthcare education and consultation. The information quality of healthcare conversational agents determines the quality of these services, yet identifying the critical dimensions used to assess the agents’ information quality, which would help strategise priorities for ensuring that quality, has received limited attention in the literature.
Method. This study conducted a questionnaire survey to investigate the critical dimensions of the information quality of healthcare conversational agents. Of the 233 participants who initially responded to the survey, two who declined to complete the questionnaire were excluded, leaving 231 responses for data analysis.
Analysis. Descriptive statistics were used to summarise the participants’ demographic information, their behavioural characteristics in using healthcare conversational agents, and the critical information quality dimensions they perceived. Furthermore, ANOVA was employed to compare the perceived importance of the information quality dimensions between participants who had used a healthcare conversational agent and those who had not.
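The group comparison described above can be sketched as follows. The ratings below are synthetic illustrations, not the study’s responses, and the helper function is a minimal hand-rolled one-way ANOVA (in practice a statistics package would also supply the p-value).

```python
# Illustrative sketch with SYNTHETIC data (not the study's responses): a one-way
# ANOVA comparing the perceived importance of one information quality dimension
# (rated on a 1-5 Likert scale) between participants who had used a healthcare
# conversational agent and those who had not. With two groups, the F statistic
# equals the square of the independent-samples t statistic.

def one_way_anova_f(*groups):
    """Return the F statistic for a one-way ANOVA over the given groups."""
    all_values = [x for g in groups for x in g]
    grand_mean = sum(all_values) / len(all_values)
    # Between-group sum of squares and degrees of freedom
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    df_between = len(groups) - 1
    # Within-group sum of squares and degrees of freedom
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_within = len(all_values) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

users_ratings = [5, 4, 5, 4, 3, 5, 4, 4]      # hypothetical ratings from users
non_user_ratings = [3, 4, 3, 2, 4, 3, 3, 4]   # hypothetical ratings from non-users

f_stat = one_way_anova_f(users_ratings, non_user_ratings)
print(f"F = {f_stat:.2f}")  # the p-value would come from the F(1, 14) distribution
```

A larger F indicates that the variance between the two groups’ mean ratings is large relative to the variance within each group, i.e., that usage experience is associated with a difference in perceived importance.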
Results. Understandability and trustworthiness were
the two top concerns for the information quality of the agents from the
participants’ perspective in this study.
Conclusions. The results of the study show that the experience of using or
not using the agents affected the participants’ perceived importance of
the agents’ information quality dimensions.
Based on artificial intelligence and natural language processing, conversational agents understand users’ questions and automatically respond to their requests (Jadhav and Thorat, 2020). These capabilities reduce the time users spend on information seeking, and the agents are applied in practice (e.g., in libraries, healthcare settings and on the internet) to provide information services such as healthcare education and consultation and to improve the effectiveness and cost-efficiency of these services (Laranjo et al., 2018). However, the quality of these services relies heavily on the agents’ information quality. It has been reported that poor-quality information given to patients by a conversational agent worsened their mental health (Haque and Rubya, 2023). The importance of the agents’ information quality for healthcare has therefore drawn considerable attention to this research problem.
Information quality is defined as fitness for use and is commonly described through multiple dimensions, e.g., completeness and accuracy (Lee et al., 2002; Setia et al., 2013). While previous studies have examined information quality issues of conversational agents for healthcare (Abd-Alrazaq et al., 2020; Tudor Car et al., 2020), none has identified the critical dimensions used to assess the agents’ information quality, which are the foundation for any attempt to improve it (Liu et al., 2021). Investigating the critical dimensions of the agents’ information quality helps reveal their priorities and supports better decisions on allocating resources (e.g., time and money) to address information quality issues, an area that has received limited attention in the literature. Therefore, this study aims to identify the critical dimensions for assessing the information quality of conversational agents for healthcare. Furthermore, we examine these dimensions from the users’ perspective (i.e., both users who have used a healthcare conversational agent and potential users who have not), since users’ perceived quality of the information given by the agents determines whether they adopt and use the agents and the services provided. Considering users’ information quality requirements when developing and improving a healthcare conversational agent will help deliver better information products and services, ensure effective healthcare-related outcomes, and facilitate the agent’s acceptance and use. Accordingly, we propose a research question to guide the present study:
Research Question: What are the critical dimensions used to assess the information quality of conversational agents for healthcare from the users’ perspective?
Using the information quality dimensions of conversational agents for healthcare identified in a systematic review, this study conducted a survey in China to investigate the relative importance of these dimensions for both users and potential users. The results show that understandability and trustworthiness were the two top dimensions used to assess the agents’ information quality from the participants’ perspective. This study thus makes two main contributions. Firstly, from a theoretical perspective, we add to the literature findings on users’ behavioural characteristics in using the agents and on the agents’ critical information quality dimensions, and we propose potential areas for future research. Secondly, from a practical perspective, the results on users’ behavioural characteristics will interest both information providers and healthcare professionals seeking insight into users’ preferences for using the agents. This understanding helps promote innovation in healthcare services by introducing conversational agents (1) to guide users’ behaviours towards self-care and (2) to attract more users. In addition, our results on the relative importance of the agents’ information quality dimensions as perceived by the participants will benefit sponsors and developers of the agents in strategising priorities for addressing information quality issues. By doing so, the agents developed will better meet users’ requirements and provide high-quality services.
The rest of the paper is organised as follows: next, we review the studies related to information quality and healthcare conversational agents; we then describe information quality dimensions of healthcare conversational agents identified from a systematic review; thereafter, we present the research methods used in the study and the results of data analysis; at last, we discuss the research findings and the implications for both academics and practice as well as the limitations of the study and future work before we conclude the article.
Information quality is a multidimensional concept. Although prior studies have investigated general information quality dimensions in the information systems context (Lee et al., 2002; Setia et al., 2013) and the dimensions used to assess the information quality of healthcare websites (Sun et al., 2019), limited attention has been paid to the critical information quality dimensions of healthcare conversational agents. Since the previously identified dimensions might not all be applicable in the context of healthcare conversational agents, and the agents might have other information quality dimensions not covered in prior studies (Lee et al., 2002; Setia et al., 2013; Sun et al., 2019), we explore the agents’ critical information quality dimensions in the present study, contributing to their better development and improvement and to the provision of quality-assured services to users.
By enabling efficient access and personalisation, conversational agents that mimic human conversations through various communication channels, such as speech, text, facial expressions and gestures, are increasingly deployed in service encounters such as education, banking, entertainment and healthcare (Ling et al., 2021; Mariani et al., 2023). Conversational agents can be categorised into three groups: chatbots without physical presence (e.g., language-teaching chatbots), virtually embodied avatars (e.g., avatars for product recommendations), and physically embodied robots (e.g., robots providing healthcare assistance to the elderly) (Van Pinxteren et al., 2020). Although conversational agents have made significant technological advancements and have the potential to enhance social presence in automated service encounters, people remain doubtful about the information provided by these agents in practice, especially in healthcare, where a decision can make the difference between life and death (Ashish and Saini, 2020). This lack of trust could be due to the poor quality of the information provided by conversational agents. This study thus investigates the critical information quality dimensions of healthcare conversational agents to reveal core areas for enhancing their information quality.
Recently, researchers have focused on improving the technical performance and user experience of healthcare conversational agents (Bérubé et al., 2021; Damij and Bhattacharya, 2022). A few research efforts have addressed the agents’ information quality issues (Abd-Alrazaq et al., 2020; Tudor Car et al., 2020), which can affect the agents’ technical performance and users’ experiences, but an empirical investigation of the agents’ critical information quality dimensions from the users’ perspective is still lacking in the literature. Such a study will help academics and practitioners recognise the relative importance of the agents’ information quality dimensions and better strategise priorities for addressing information quality issues when developing and improving the agents.
By conducting a systematic review (Liu et al., 2023), we identified seventeen dimensions from the forty-five included empirical studies investigating the information quality of conversational agents for healthcare. We labelled the forty-five included studies (see Appendix 1) with the letter S followed by a number to mark the references identified in the systematic review. The information quality dimensions identified in the review are shown in Table 1, ordered from highest to lowest frequency of use across the included studies. These dimensions, which were explicitly mentioned in the included studies, serve as the basis for this study to investigate their importance and identify the critical dimensions from the users’ perspective.
With the list of included studies, we formulated a data extraction form using a spreadsheet. We then analysed and coded the text of each selected study relevant to the agents’ information quality dimensions and their issues, employing the content analysis technique (Khan and Qayyum, 2019). Firstly, we identified specific sentences and paragraphs from the included studies that described the dimensions and their issues. Secondly, we examined the extracted text multiple times to comprehensively understand its content and extracted semantic units. Thirdly, we abstracted these semantic units and assigned preliminary codes. Fourthly, we consolidated and categorised the codes based on their commonalities and distinctions. Lastly, we developed themes based on the underlying meanings of the content in each category. On this basis, we synthesised all the extracted data under information quality dimensions; the findings on the dimensions and their issues are described after Table 1. To establish the definitions of the information quality dimensions, we further undertook a two-step process. Firstly, within each dimension, we analysed the extracted sentences describing the dimension from the articles and synthesised the information into a clear definition, capturing the essential keywords used in the extracted sentences. Secondly, we reviewed and refined the definitions to ensure their alignment with the meaning conveyed in the source material. See Table 1.
No. | Information quality dimension | Definition | Support literature |
---|---|---|---|
1 | Accuracy | The information is free of error. | [S1, S7, S8, S13, S22, S24, S25, S26, S27, S28, S30, S31, S44, S45] |
2 | Appropriateness | The information is applicable to a given task. | [S3, S8, S9, S14, S16, S21, S22, S24, S29, S30, S32, S36, S39, S41] |
3 | Empathy | The information is able to address users’ emotional desire. | [S1, S6, S10, S15, S24, S33, S39, S41] |
4 | Timeliness | The information is up to date. | [S16, S23, S29, S30, S31, S35, S40, S44] |
5 | Completeness | The information is presented completely. | [S1, S7, S32, S39, S41, S44, S45] |
6 | Consistency | The information is presented consistently. | [S17, S27, S29, S36, S39, S41] |
7 | Helpfulness | The information is helpful to a given task. | [S3, S31, S32, S34, S42, S43] |
8 | Clarity | The information is presented clearly. | [S7, S9, S17, S36] |
9 | Trustworthiness | The information is worthy of trust. | [S1, S7, S37, S45] |
10 | Relevance | The information is relevant to a given task. | [S7, S24, S27, S45] |
11 | Repetitiveness | The information is repetitive. | [S17, S23, S33, S38] |
12 | Accessibility | The information is easily accessible when needed. | [S21, S35, S38, S40] |
13 | Conciseness | The information is presented concisely. | [S11, S21, S30, S36] |
14 | Realisability | The information is presented in accordance with reality. | [S16, S21, S32, S36] |
15 | Understandability | The information is easy to understand. | [S3, S32, S35, S38] |
16 | Likability | The information is delightful. | [S3, S29, S37] |
17 | Amount of information | The number of information items displayed that users need to manipulate. | [S5, S11] |
Table 1. Information quality dimensions of healthcare conversational agents and their definitions
Accuracy means freedom from error in (a) information detection from user inputs (Goh et al., 2021; Yang et al., 2021), (b) the grammar and wording used in responses (Rose-Davis et al., 2019), (c) answering users’ queries (Boczar et al., 2020; Nadarzynski et al., 2019), and (d) diagnostic results generated from interaction with users (Mujeeb et al., 2017). Failure to recognise or transcribe users’ inputs can result in the delivery of inaccurate information (Goh et al., 2021). Inaccuracy may also occur when the agent presents information in awkward wording or with poor grammar (Wang et al., 2021).
Appropriateness addresses the information recognised by an agent in the conversations with users that matches users’ requests (Boczar et al., 2020; Comendador et al., 2015; Denecke et al., 2018). Additionally, this dimension involves providing contextually relevant responses (Gaffney et al., 2020; Thompson et al., 2019) and using appropriate language (Beilharz et al., 2021). The information quality challenges related to this dimension include instances of misunderstanding user inputs (Boczar et al., 2020), offering inappropriate responses, and being unable to respond altogether (Kocaballi et al., 2020).
Empathy highlights the importance of an agent’s response information that acknowledges users' emotions (Palanica et al., 2019) and addresses their mental health concerns (Nadarzynski et al., 2019) while using respectful language for sensitive issues (Nadarzynski et al., 2019; Barnett et al., 2021; Kocaballi et al., 2020). The issues related to empathy encompass a lack of empathetic responses (Nadarzynski et al., 2019) and indicate instances where the agent fails to comprehend or convey human emotions (Barnett et al., 2021; Kocaballi et al., 2020; Nadarzynski et al., 2019).
Timeliness emphasises the significance of an agent’s responses that are generated promptly (Goldenthal et al., 2019), delivered within the desired time interval (Griol and Callejas, 2016), and kept up to date (Huang and Chueh, 2021). The issues pertaining to this dimension arise when an agent fails to provide timely responses and presents outdated information, leading to incorrect responses being sent to users (Elmasri and Maeder, 2016).
Completeness addresses whether the information provided by an agent includes all necessary messages (Goh et al., 2021; Kocaballi et al., 2020) and offers in-depth details (Thompson et al., 2019). This dimension also deals with the sufficiency of information to meet users' needs (Huang and Chueh, 2021; Nadarzynski et al., 2019). Issues related to completeness may arise due to inadequate or incomplete responses (Nadarzynski et al., 2019) or the display of missing words (Goh et al., 2021).
Consistency refers to the information provided by an agent that is free of contradictions (Balsa et al., 2020) and presented consistently in a specific context with consistent wording (Comendador et al., 2015). The issues related to consistency include incongruences in backward and forward conversations (Balsa et al., 2020), information overload and confusion (Kocaballi et al., 2020), and the annoyance caused by inconsistent terms used in responses (Danda et al., 2016).
Helpfulness refers to an agent’s ability to guide users in its usage (Denecke et al., 2018) and to provide information that addresses users’ queries (Thompson et al., 2019) and specific healthcare issues (Abdullah et al., 2018). The issues related to this dimension involve the provision of excessive or unnecessary information (Denecke et al., 2018) and the presence of information unhelpful to users (Thompson et al., 2019).
Clarity comprises two aspects: (1) the interface components of an agent that are clearly worded, and (2) the artificial intelligence-generated responses of the agents that are presented clearly, along with additional explanations (Balsa et al., 2020; Gaffney et al., 2020). Information is deemed unclear when awkward wording is used, leading to user confusion (Gaffney et al., 2020).
Trustworthiness looks at the reliability, believability, lack of bias, and support with references in artificial intelligence-generated responses, presented with authority and without advertisements (Goh et al., 2021; Kang and Wei, 2018). Issues related to trustworthiness involve information that is inaccurate, unbelievable, and unfair (Kang and Wei, 2018), as well as information presented with bias (Goh et al., 2021).
Relevance focuses on the artificial intelligence-generated responses of an agent that are in relation to users’ inputs, a specific scenario and users’ tasks (Goh et al., 2021; Rose-Davis et al., 2019). Thus, irrelevant responses are undoubtedly the issue under this dimension (Rose-Davis et al., 2019).
Repetitiveness pertains to the information provided by an agent being repetitive. For instance, an agent repeatedly asks a user the same question daily to monitor their health status (Ly et al., 2017). It also involves the agent giving the exact same answer to the same question, regardless of when or who asks (Ly et al., 2017). Additionally, repetitiveness is linked to the issue of an agent mechanically providing questions or responses within a set time, causing users to feel like they are experiencing repetition (Fitzpatrick et al., 2017).
Accessibility relates to the ease of accessing required information. This dimension concerns both the ease of accessing health-related information when users need it (Beilharz et al., 2021; Griol and Callejas, 2016) and the presence of additional references that support the provided information, accessible through extra links (Beilharz et al., 2021). It also addresses the accessibility of information about how to use an agent (Griol and Callejas, 2016). Information quality issues within this dimension arise when users cannot access or face difficulties in accessing the information (Beilharz et al., 2021; Griol and Callejas, 2016).
Conciseness pertains to the information provided by an agent, presented in a simplified and compact form (Beilharz et al., 2021), using simple language (Elmasri and Maeder, 2016). Additionally, when users desire more details on concise content, it allows them to explore further (Beilharz et al., 2021; Elmasri and Maeder, 2016). The presence of big chunks of text would reduce conciseness, leading to overwhelming information for users (Beilharz et al., 2021).
Realisability means that an agent’s artificial intelligence-generated responses simulate the typing delays, language and level of understanding of a real counsellor in conversations (van Heerden et al., 2017). It also involves providing factual advice to users instead of softened information (Thompson et al., 2019). Realisability issues pertain to responses that appear unrealistically fast and to the use of overly formal language, which can make users feel they are not talking to a real person (van Heerden et al., 2017).
Understandability involves three aspects: (a) using language that is easily understandable; (b) providing artificial intelligence-generated responses that are easy to grasp; and (c) ensuring the meaning of the information is easily comprehensible (Thompson et al., 2019). The issues under this dimension are related to information provided by an agent that leads to user misunderstandings. Meanwhile, users have expressed complaints that the agent offers surface-level information, failing to meet their needs (Ly et al., 2017).
Likability pertains to the artificial intelligence-generated responses that exhibit fun, friendliness, and kindness (Comendador et al., 2015; Kang and Wei, 2018). Consequently, users may express their liking and pleasure in response to such interactions (Kang and Wei, 2018). The issues within the likability dimension involve instances where an agent provides unfriendly and unkind responses, resulting in responses that are likely to be disliked and unpleasant for users (Kang and Wei, 2018).
Amount of information relates to the number of information items provided by an agent that participants need to manipulate in working memory (Chen et al., 2020). The cited study reported that the number of information items requiring manipulation affects working memory and cognitive load (Chen et al., 2020). On the one hand, when the amount of information given exceeds the limits of working memory and cognitive load, it becomes too much for users to process (Chen et al., 2020); users might then find that the information does not match their requests. On the other hand, if too little information is provided, users might consider it insufficient to meet their needs (Chen et al., 2020).
Given that surveys help acquire the characteristics, opinions and prior experiences of one or more groups of people by asking questions and tabulating the answers (Leedy and Ormrod, 2019), this study used a survey research design to collect data and identify the critical dimensions for assessing healthcare conversational agents’ information quality from the users’ perspective. We employed an online questionnaire, as it saves travel expenses and long-distance telephone calls when subjects are not situated in the local area (Leedy and Ormrod, 2019). We developed a survey instrument with two parts. The first part contains questions about basic personal information (e.g., gender, age, educational background and occupation) and the participants’ behavioural characteristics in using healthcare conversational agents. The second part asks subjects for their opinions on the importance of the dimensions used to assess the agents’ information quality, on a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). These dimensions were identified and gathered through the systematic review. The survey questionnaire was revised through several iterations based on comments from scholars with expertise in survey design, healthcare conversational agents and information quality management, and on a pre-test with a set of the agents’ users. These revision iterations contributed to the content validity of the survey instrument. They also showed that the pre-test participants were able to comprehend and answer the questions and provide feedback for questionnaire improvement. Accordingly, our potential participants would have a good chance of understanding and completing the questionnaire.
Appendix 2 includes a link to the original Chinese version of the questionnaire distributed to the participants. To facilitate communication of the current study through an international outlet, we have also translated the survey questions into English (see Appendix 2). A professional survey company was recruited to distribute the survey questionnaire (see Appendix 2) and collect the data in China between October and November 2022 (we selected China as the research site since this research is supported and funded to investigate the information quality of healthcare conversational agents in the Chinese context). This survey company can reach nearly 300 million users each month, as over 10 million people fill out surveys daily on the online survey platform 问卷星 (Wènjuànxīng = Sojump), and it can conduct precise survey targeting based on users’ interest tags and specific groups such as physicians or elderly people. The company commonly sends survey links to its registered users and to users of its cooperative partners through the online survey platform, email, text messages and social media. Respondents can fill out the questionnaire on their own computer or mobile device. In this study, we did not restrict the surveyed subjects by interest tags or specific groups. Hence, the company assisted us in targeting a diverse range of participants, including both users who might have used a healthcare conversational agent and those who had not. To incentivise subjects to answer the survey questions, participants were informed that they could enter a lucky draw to receive a 10 RMB WeChat red envelope, a discount coupon or a gift card after completing the questionnaire. Participation was voluntary, and participants were notified that they could change their mind at any time and stop completing the survey without consequences, and that the data gathered from the survey would be published in a form that would not identify them.
Since the survey company can reach a large number of users each month, it had a good chance of widely recruiting subjects likely to fill out our questionnaire. Determining the minimum sample size is necessary before conducting a survey to avoid significant costs (Lakens, 2022). In this study, we accepted a sampling error of 10% for the online survey, for which a sample of around 100 completed questionnaires is acceptable (Hill, 1998; Weisberg et al., 1989). Given the funding and time limitations on data collection for this research project, the company needed two months to complete the data collection, targeting the minimum required sample size. After the data collection process was completed, a total of 233 subjects had returned the survey questionnaire.
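The acceptability of roughly 100 completed questionnaires at a 10% sampling error follows from the standard sample-size formula for estimating a proportion. The sketch below assumes the conventional 95% confidence level (z ≈ 1.96) and the worst-case proportion p = 0.5; the study does not state these parameters explicitly.

```python
# Standard sample-size formula for estimating a proportion:
#   n = z^2 * p * (1 - p) / e^2
# Assumed parameters (not stated explicitly in the study): 95% confidence
# (z = 1.96) and the worst-case proportion p = 0.5, with sampling error e = 0.10.
import math

z = 1.96   # z-score for a 95% confidence level
p = 0.5    # worst-case proportion (maximises p * (1 - p))
e = 0.10   # acceptable sampling error (10%)

n = math.ceil(z**2 * p * (1 - p) / e**2)
print(n)  # -> 97, i.e., roughly 100 completed questionnaires
```

Tightening the sampling error to 5% under the same assumptions would roughly quadruple the required sample (to about 385), which illustrates why the 10% level keeps the survey affordable.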
By reviewing the answers from the participants, two questionnaires that the participants declined to fill in were removed. As a result, 231 questionnaires remained for data analysis (using SPSS) in this study. Table 2 presents the demographic information of the participants in the survey.
As shown in Table 2, 81 participants (35.1%) were male and 150 (64.9%) were female. Most participants were young or middle-aged: 122 (52.8%) were aged 18 to less than 30 years old, and 80 (34.6%) were aged 30 to less than 40 years old. Regarding educational background, 160 (69.3%) held a bachelor’s degree, indicating the relatively high education level of the study sample. In terms of occupation, 119 (51.5%) were company employees, followed by 50 students (21.7%). Only one participant was identified as a farmer.
Category | No. | Percentage |
---|---|---|
Sex | ||
Male | 81 | 35.1% |
Female | 150 | 64.9% |
Age | ||
18 <= Y < 30 years old | 122 | 52.8% |
30 <= Y < 40 years old | 80 | 34.6% |
40 <= Y < 50 years old | 21 | 9.1% |
Y >= 50 years old | 8 | 3.5% |
Education background | ||
Junior college or below | 48 | 20.7% |
Bachelor’s degree | 160 | 69.3% |
Master’s degree | 21 | 9.1% |
Doctor’s degree | 2 | 0.9% |
Occupation | ||
Students | 50 | 21.7% |
Public servants | 16 | 6.9% |
Company employees | 119 | 51.5% |
Professionals (e.g., physicians and lawyers) | 16 | 6.9% |
Freelancers | 19 | 8.2% |
Retirees | 3 | 1.3% |
Unemployed individuals | 7 | 3.0% |
Others (i.e., farmer) | 1 | 0.4% |
Total | 231 |
Table 2. Demographic information of the participants in this study
Among the participants, ninety-six (41.6%) reported that they had used a healthcare conversational agent, while 135 (58.4%) had not (these may be potential users of the agents). Of those who had used a healthcare conversational agent (n = 96), thirty-three (34.4% of the users) had used one for less than six months and fourteen (14.6%) for more than two years (see Figure 1), suggesting that healthcare conversational agents have only recently come into practical use. Thirty-seven of these users (38.5%) used a healthcare conversational agent less than once per month (e.g., once every two or three months), and only one participant (1.1%) used the agent(s) many times per day, showing a low frequency of use in this survey (see Figure 2). Most participants indicated limited or no prior experience with healthcare conversational agents, highlighting their unfamiliarity with the agents. This may be because many participants remain hesitant to embrace this smart technology in healthcare, owing to scepticism about the quality of the information provided by the agents. To address this concern, the present study examined the crucial dimensions of information quality perceived by users and potential users. Addressing these significant information quality issues could attract more individuals to engage with the agents.
For the purposes of using the agent(s) and the ways of accessing them, a participant could select one or more options in the survey. Regarding the purposes of using a healthcare conversational agent (see Figure 3), seventy-six (79.2% of the users) used the agent(s) for health self-testing, while sixteen (16.7%) also used the agent(s) to pass the time. Only one participant (1.1%) used the agent(s) to buy medicines. Regarding the ways of accessing the agent(s) (see Figure 4), seventy-one (73.9% of the users) used downloaded mobile phone apps, and fifty-five (57.3%) were introduced to the agent(s) through WeChat. Only one participant (1.1%) accessed a healthcare conversational agent by visiting other applications. These ways of accessing the agent(s) are described in Table 3. Notably, the preference for seeking healthcare information from the internet has now surpassed that for television channels, and WeChat, Weibo and QQ stand out as the most widely used internet-based social platforms in China (Hu, 2022). Leveraging these platforms to introduce a healthcare conversational agent presents an opportunity to effectively promote its use among the wider public.
Figure 1. Years of using a healthcare conversational agent identified in this survey
Figure 2. Frequency of using a healthcare conversational agent identified in this survey
Figure 3. Purposes of using a healthcare conversational agent identified in this survey
Figure 4. Ways of accessing a healthcare conversational agent identified in this survey
Way of accessing a healthcare conversational agent | Development company | Types of services | How to introduce a healthcare conversational agent
---|---|---|---
Websites | Healthcare provider | Provide medical guidance | e.g., show a conversational user interface when users access a website
Mobile phone apps | App developer | Provide online medical consultation and advisory services | e.g., show a conversational user interface when users open an app
WeChat | Tencent | Enable instant messaging, photo sharing, text and link sharing in Moments, and publishing articles, news, educational content and more through official WeChat accounts | e.g., include a healthcare conversational agent in a WeChat applet, and share its code or its access link
Weibo | Sina | Publish text, images, videos and various other forms of content, and enable advertising promotion, private messaging and chat functionalities | e.g., share the code or the link of a healthcare conversational agent
QQ | Tencent | Enable instant messaging, photo sharing, and text and link sharing, and allow users to get the latest news, articles and media content from QQ News, QQ Reading and other features | e.g., share the code or the link of a healthcare conversational agent
Tangible robots | Robot developer | Provide real-time medical consultation and advisory services | e.g., place a robot at the entrance of libraries and healthcare settings
Table 3. A summary of the ways of accessing a healthcare conversational agent
Table 4 presents the importance of the information quality dimensions of healthcare conversational agents across all participants (n = 231). Understandability and trustworthiness were the two top concerns for the information quality of the agents from the participants’ perspective in this study. For sponsors and developers of healthcare conversational agents, these two dimensions can be treated as important requirements and indicators for product design and evaluation. By contrast, repetitiveness was perceived as a less important information quality dimension by the surveyed participants.
Information quality dimension | Number | 1 | 2 | 3 | 4 | 5 | Median | Mean (SDa) | Rank
---|---|---|---|---|---|---|---|---|---
Accuracy | 231 | 3 (1.3) | 8 (3.5) | 35 (15.2) | 124 (53.7) | 61 (26.4) | 4.00 | 4.00 (0.821) | 5
Appropriateness | 231 | 3 (1.3) | 6 (2.6) | 55 (23.8) | 121 (52.4) | 46 (19.9) | 4.00 | 3.87 (0.802) | 9
Empathy | 231 | 3 (1.3) | 21 (9.1) | 71 (30.7) | 94 (40.7) | 42 (18.2) | 4.00 | 3.65 (0.924) | 14
Timeliness | 231 | 4 (1.7) | 4 (1.7) | 55 (23.8) | 104 (45.0) | 64 (27.7) | 4.00 | 3.95 (0.861) | 7
Completeness | 231 | 6 (2.6) | 2 (0.9) | 33 (14.3) | 120 (51.9) | 70 (30.3) | 4.00 | 4.06 (0.844) | 3
Consistency | 231 | 3 (1.3) | 9 (3.9) | 46 (19.9) | 104 (45.0) | 69 (29.9) | 4.00 | 3.98 (0.880) | 6
Helpfulness | 231 | 2 (0.9) | 8 (3.5) | 44 (19.0) | 98 (42.4) | 79 (34.2) | 4.00 | 4.06 (0.865) | 3
Clarity | 231 | 3 (1.3) | 6 (2.6) | 43 (18.6) | 104 (45.0) | 75 (32.5) | 4.00 | 4.05 (0.856) | 4
Trustworthiness | 231 | 3 (1.3) | 15 (6.5) | 31 (13.4) | 93 (40.3) | 89 (38.5) | 4.00 | 4.08 (0.945) | 2
Relevance | 231 | 4 (1.7) | 10 (4.3) | 68 (29.4) | 95 (41.1) | 54 (23.4) | 4.00 | 3.80 (0.906) | 12
Repetitiveness | 231 | 7 (3.0) | 28 (12.1) | 87 (37.7) | 81 (35.1) | 28 (12.1) | 3.00 | 3.41 (0.955) | 16
Accessibility | 231 | 4 (1.7) | 10 (4.3) | 65 (28.1) | 98 (42.4) | 54 (23.4) | 4.00 | 3.81 (0.902) | 11
Conciseness | 231 | 9 (3.9) | 21 (9.1) | 74 (32.0) | 85 (36.8) | 42 (18.2) | 4.00 | 3.56 (1.015) | 15
Realisability | 231 | 4 (1.7) | 13 (5.6) | 51 (22.1) | 100 (43.3) | 63 (27.3) | 4.00 | 3.89 (0.930) | 8
Understandability | 231 | 4 (1.7) | 5 (2.2) | 39 (16.9) | 99 (42.9) | 84 (36.4) | 4.00 | 4.10 (0.877) | 1
Likability | 231 | 6 (2.6) | 13 (5.6) | 70 (30.3) | 86 (37.2) | 56 (24.2) | 4.00 | 3.75 (0.972) | 13
Amount of information | 231 | 8 (3.5) | 11 (4.8) | 48 (20.8) | 108 (46.8) | 56 (24.2) | 4.00 | 3.84 (0.964) | 10

Note: columns 1 to 5 give the number of responses (percent) for each rating. SDa: standard deviation.
Table 4. The importance of the information quality dimensions of healthcare conversational agents from the perspective of the participants in this study
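As a check on Table 4, the median, mean and standard deviation of any row can be recomputed from its response counts. A minimal Python sketch for the Accuracy row (counts taken from the table; the same procedure applies to every other dimension):

```python
import statistics

# Response counts for the Accuracy dimension in Table 4
# (5-point Likert scale, n = 231).
counts = {1: 3, 2: 8, 3: 35, 4: 124, 5: 61}

# Expand the counts into the 231 individual ratings.
ratings = [score for score, n in counts.items() for _ in range(n)]

n = len(ratings)                      # 231
mean = sum(ratings) / n               # 4.00
sd = statistics.stdev(ratings)        # sample standard deviation, 0.821
median = statistics.median(ratings)   # 4.00

print(n, round(mean, 2), round(sd, 3), median)
```

Running this reproduces the tabled values for Accuracy (mean 4.00, SD 0.821, median 4.00), which is how the corrected percentages above were verified.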
To determine whether having used a healthcare conversational agent affected the participants’ perceived importance of the agents’ information quality dimensions, we also compared the means of the two groups by one-way ANOVA. Results show that experience of using the agent(s) affected the perceived importance of completeness, consistency, helpfulness, clarity, trustworthiness, realisability and understandability, but not of the other information quality dimensions (see Table 5). For the participants who had used the agent(s), completeness and trustworthiness were the top two dimensions for assessing the agents’ information quality, while for those who had not used the agent(s) (potential users of the agents), understandability and clarity were the top two critical dimensions when considering whether to use the agent(s).
Information quality dimensions | Had used a healthcare conversational agent (n = 96): Mean (SDa) | Had used: Rank | Had not used a healthcare conversational agent (n = 135): Mean (SDa) | Had not used: Rank | F-value (Sig.)
---|---|---|---|---|---
Accuracy | 4.17 (0.675) | 6 | 3.89 (0.895) | 4 | 6.580* |
Appropriateness | 3.95 (0.786) | 9 | 3.81 (0.812) | 8 | 1.547 |
Empathy | 3.66 (0.868) | 13 | 3.65 (0.964) | 12 | 0.001 |
Timeliness | 4.07 (0.714) | 8 | 3.87 (0.945) | 6 | 3.251 |
Completeness | 4.32 (0.703) | 1 | 3.88 (0.890) | 5 | 16.353*** |
Consistency | 4.17 (0.763) | 6 | 3.85 (0.935) | 7 | 7.387** |
Helpfulness | 4.24 (0.791) | 4 | 3.93 (0.895) | 3 | 7.580** |
Clarity | 4.19 (0.786) | 5 | 3.95 (0.892) | 2 | 4.454* |
Trustworthiness | 4.30 (0.809) | 2 | 3.93 (1.005) | 3 | 9.205** |
Relevance | 3.94 (0.818) | 10 | 3.70 (0.955) | 11 | 3.780 |
Repetitiveness | 3.40 (0.978) | 15 | 3.42 (0.942) | 14 | 0.043 |
Accessibility | 3.89 (0.832) | 11 | 3.76 (0.948) | 9 | 1.035 |
Conciseness | 3.60 (1.010) | 14 | 3.53 (1.021) | 13 | 0.272 |
Realisability | 4.10 (0.827) | 7 | 3.73 (0.971) | 10 | 9.233** |
Understandability | 4.26 (0.757) | 3 | 3.99 (0.938) | 1 | 5.643* |
Likability | 3.78 (0.976) | 12 | 3.73 (0.973) | 10 | 0.181 |
Amount of information | 3.94 (0.938) | 10 | 3.76 (0.979) | 9 | 1.846 |
SDa: standard deviation. ***: p < 0.001; **: p < 0.01; *: p < 0.05
Table 5. ANOVA results for participants who had used or had not used a healthcare conversational agent in this study
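The group comparison behind Table 5 is a one-way ANOVA, which for two groups reduces to the ratio of between-group to within-group mean squares. A minimal sketch in pure Python; the ratings below are hypothetical, since the study’s raw per-participant responses are not published:

```python
def one_way_f(groups):
    """Return the F statistic for a one-way ANOVA over a list of groups."""
    all_x = [x for g in groups for x in g]
    n, k = len(all_x), len(groups)
    grand_mean = sum(all_x) / n
    # Between-groups sum of squares: group size times squared deviation
    # of each group mean from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-groups sum of squares: deviations from each group's own mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical 1-5 importance ratings for one dimension from the two groups.
users = [5, 4, 4, 5, 3, 4, 5, 4]
non_users = [3, 4, 3, 2, 4, 3, 3, 4]

print(round(one_way_f([users, non_users]), 3))  # F = 8.0 for these data
```

In practice this would be run once per information quality dimension (seventeen tests here), with the resulting F compared against the F distribution with (1, n − 2) degrees of freedom to obtain the significance levels starred in Table 5.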
In this study, we identified the critical dimensions used to investigate the information quality of conversational agents in the healthcare context by conducting a survey in China. Analysis of the collected data indicates that understandability and trustworthiness were the critical information quality dimensions of the agents from the participants’ perspective, addressing our proposed research question. The relative importance of all identified information quality dimensions is provided in Table 4.
In the survey, understandability was considered the top critical information quality dimension for healthcare conversational agents. This may be because being able to read and understand the information provided by the agents is the foundation for making use of the related services. That the provided information can be understood is an important aspect of the creation of conversational agents (Kasirzadeh and Gabriel, 2023), and we confirm this in the context of healthcare conversational agents. A high level of information understandability usually involves the following items: (a) the language of the information provided by an agent is understandable (Denecke et al., 2018), (b) the artificial intelligence-generated responses provided by an agent are easy to understand, and (c) the meaning of the information provided by an agent is easy to comprehend (Thompson et al., 2019). These three items should therefore be considered when developing the agents and assessing their information understandability, to meet individuals’ requirements.
Meanwhile, participants were also highly concerned about the trustworthiness of the information provided by the agents. This may be attributed to the fact that the perceived trustworthiness of information plays a pivotal role in the decision to use that information or the related services, subsequently affecting the adoption and continued use of the agents. Our findings align, in the context of healthcare conversational agents, with the design principles (e.g., information trustworthiness) for conversational agents supporting information seeking highlighted by Stieglitz et al. (2022). To achieve a high level of trustworthiness, the artificial intelligence-generated responses of the agents should be reliable, believable and free of bias (Goh et al., 2021; Kang and Wei, 2018). The information also needs to be supported by references and appropriate advertisements, and the authorship of the information should be clearly stated (Goh et al., 2021).
Furthermore, we compared the differences in the perceived importance of these information quality dimensions between the two groups (i.e., the participants who had and had not used a healthcare conversational agent), as shown in Table 5. The users of the agents considered completeness more important than the other dimensions for assessing the agents’ information quality, while the potential users tended to focus on clarity. These findings also suggest that completeness and clarity are two critical information quality dimensions to attend to when designing a healthcare conversational agent. This is consistent with previous studies (Mingotto et al., 2021; Riefle et al., 2022), which have pointed out individuals’ requirements for complete and clear information from conversational agents, within the scope of this study. However, the reasons for the differences in the perceived importance of these two dimensions between the two groups need to be further explored through in-depth interviews and studies with the participants.
This study has implications for both academics and practitioners. We also propose potential areas for future research in this field (see Table 6).
Findings of the study | Implications for academics | Implications for practices | Potential research areas |
---|---|---|---|
User behavioural characteristics of using the agents | Reveal that the use of the agents remains in their infancy (see Figure 1) with a low usage frequency (see Figure 2) | Promote the healthcare education function of the agents to disseminate scientific health knowledge (see Figure 3) and provide multiple ways of accessing the agents (see Figure 4) | Investigate the factors influencing users’ behaviours, assisting in better developing and improving the agents to meet users’ needs and facilitate their use |
Results of relative importance of the agents’ information quality dimensions | Add the critical dimensions used to assess the agents’ information quality to the existing literature | Provide the items to be considered in the development and evaluation of the agents (see Discussion) | Investigate the critical information quality dimensions of the agents from the experts’ side and compare with the results of this study to learn the differences
Differences in relative importance of the agents’ information quality dimensions between users and potential users of the agents | Show that completeness was the top concern of the agents’ information quality for the users, while the potential users focused on clarity | Call for (1) setting priorities for ensuring the agents’ information quality to address users’ and potential users’ information quality requirements (e.g., completeness and clarity); and (2) educating the public to improve their technical and information literacy on the agents | Explore the reasons for the differences between the two groups through in-depth interviews or experiments to identify problem areas for addressing information quality
Table 6. Implications for academics and practices in this study
Our findings on user behavioural characteristics of using the agents reveal that the utilisation of healthcare conversational agents is still in its early stages, with a low usage frequency. For academics, investigating the factors influencing user behaviours assists in the development and enhancement of these agents to align with user needs and preferences. On the practical front, sponsors and developers are encouraged (1) to promote the educational aspects of healthcare conversational agents to facilitate the dissemination of health information and (2) to provide a range of access methods to promote their products.
Based on the results on the relative importance of the agents’ information quality dimensions, this study proposes adding the critical dimensions used to evaluate the agents’ information quality, from the perspective of both users and potential users, to the existing literature. Academically, these findings emphasise the importance of including these information quality dimensions in the agents’ development and evaluation. In practice, the dimensions identified in this study can serve as guidelines for assessing and improving the agents’ information quality.
The findings about the differences in the relative importance of the agents’ information quality dimensions between users and potential users indicate that users prioritised the completeness of information when using the agents, while potential users emphasised information clarity. Academics are urged to prioritise these information quality dimensions when studying and developing healthcare agents, and to explore the reasons for the differences between user groups through further research. For practical applications, there is a call (1) to prioritise addressing information quality issues and (2) to educate the public to enhance their technical and information literacy about healthcare conversational agents. A growing number of individuals may then be inclined to embrace and utilise the agents.
This study has made several contributions, but it also has a few limitations. First, the information quality dimensions of healthcare conversational agents used in the survey were identified from a systematic review, and we may have missed other potential dimensions used to assess the agents. Future research is encouraged to include further relevant information quality dimensions from both users and experts. Furthermore, based on experts’ viewpoints, a system of information quality dimensions with weighted coefficients could be established, contributing to a more scientific evaluation of the agents’ information quality. Second, the present study was conducted on data collected in China. This convenience sample of a limited group of users and potential users who volunteered to complete the survey might bias the results. Researchers are thus recommended to conduct larger studies involving more participants across geographies and countries to better understand this phenomenon. Third, this study included participants who were not familiar with healthcare conversational agents in the study sample. Although it is essential to consider the potential biases that might arise from exclusively targeting knowledgeable respondents, focusing on respondents who possess a baseline understanding of healthcare conversational agents could provide deeper insights. Future research is encouraged to consider selectively enlisting users with prior knowledge of the agents, to improve the accuracy and pertinence of conclusions about the pivotal dimensions of the agents’ information quality and to reveal how those findings compare with this study. Finally, the study’s conclusions were derived from the participants’ responses, which were shaped by their perceptions, experiences and memories.
Although, during survey instrument development, we described each information quality dimension in a simple way with examples, to help participants better understand these dimensions and to ensure that the collected data reflect their actual perceptions and experiences, the responses may still have been influenced by recall bias and by the participants’ ability to discern the exact conceptions of the information quality dimensions. That is, participants may not accurately recall experiences of using a healthcare conversational agent that occurred over time, nor accurately understand the concepts related to these dimensions. Further verification and clarification of the results should therefore be sought from the participants.
The present study identifies the critical information quality dimensions of healthcare conversational agents in a survey of subjects in China, addressing the research question, and compares the differences in the relative importance of these dimensions between users and potential users of the agents. The results show that understandability and trustworthiness were the two top concerns for the agents’ information quality from the participants’ perspective (see Table 4). Furthermore, having or not having experience of using a healthcare conversational agent affected the participants’ perceived importance of completeness, consistency, helpfulness, clarity, trustworthiness, realisability and understandability, but not of the other information quality dimensions (see Table 5). These conclusions were derived from the analysis of the responses provided by the recruited participants, whose viewpoints may have been influenced by their recollections of using a healthcare conversational agent and their ability to accurately understand the information quality dimensions; further verification and clarification of the results should be sought from the participants in future research. Based on the research findings, the study also provides implications for both academics and practitioners and proposes future research possibilities (see Table 6).
This research was supported and funded by the Humanities and Social Sciences Youth Foundation, Ministry of Education of the People’s Republic of China (Grant No.21YJC870009). The work of Chaowang Lan is supported and funded by Guangxi Science and Technology Base and Talent Special Fund (Grant No. 2021AC19207).
Caihua Liu is an Associate Professor at the School of Artificial Intelligence, Guangxi Colleges and Universities Key Laboratory of AI Algorithm Engineering, Guilin University of Electronic Technology, China. She can be contacted at Caihua.Liu@guet.edu.cn.
Guochao Peng is based at the School of Information Management at Sun Yat-sen University, China. He holds a BSc in Information Management (1st Class Honours) and a PhD in Information Systems (IS), both from the University of Sheffield. Prof. Peng has over 90 publications in the IS field. He is the co-founder and co-chair of the IADIS International Conference on Information Systems Post-Implementation and Change Management (ISPCM) since 2012. He has also conducted peer review of submissions to more than 15 leading IS journals and international conferences. He can be contacted at penggch@mail.sysu.edu.cn.
Shufeng Kong is an Associate Professor with the School of Software Engineering at Sun Yat-sen University, China. Before joining Sun Yat-sen University, Dr. Kong worked as a Research Fellow at Cornell University, USA and Nanyang Technological University, Singapore, successively. He was also a visiting research scholar at UC Irvine in 2018. He served for the program committee of many international conferences such as AAAI and IJCAI. He can be contacted at kongshf@mail.sysu.edu.cn.
Chaowang Lan (corresponding author) is a Lecturer at the School of Artificial Intelligence, Guangxi Colleges and Universities Key Laboratory of AI Algorithm Engineering, Guilin University of Electronic Technology, China. He can be contacted at chaowanglan@guet.edu.cn.
Haoliang Zhu is an Associate Professor of the School of Intelligent Manufacturing at Nanning University. His research on algorithms for machines has been funded by provincial and municipal science and technology authorities. Assoc. Prof. Zhu has hosted 10+ research projects as well as scientific and technological key projects. He can be contacted at zhuhaoliang@nnxy.edu.cn.
Abd-Alrazaq, A., Safi, Z., Alajlani, M., Warren, J., Househ, M., & Denecke, K. (2020). Technical metrics used to evaluate health care chatbots: scoping review. Journal of Medical Internet Research, 22(6), e18301. https://doi.org/10.2196/18301
Abdullah, A. S., Gaehde, S., & Bickmore, T. (2018). A tablet based embodied conversational agent to promote smoking cessation among veterans: a feasibility study. Journal of Epidemiology and Global Health, 8(3-4), 225-230. https://doi.org/10.2991/j.jegh.2018.08.104
Ashish, V.P., Saini, D. (2020). Would you trust a bot for healthcare advice? An empirical investigation. Proceedings of the 24th Pacific Asia Conference on Information Systems (p. 62). Association for Information Systems. https://aisel.aisnet.org/pacis2020/62
Balsa, J., Félix, I., Cláudio, A. P., Carmo, M. B., e Silva, I. C., Guerreiro, A., Guedes, M., Henriques, A., & Guerreiro, M. P. (2020). Usability of an intelligent virtual assistant for promoting behavior change and self-care in older people with type 2 diabetes. Journal of Medical Systems, 44(7), 1-12. https://doi.org/10.1007/s10916-020-01583-w
Barnett, A., Savic, M., Pienaar, K., Carter, A., Warren, N., Sandral, E., Manning, V., & Lubman, D. I. (2021). Enacting ‘more-than-human’care: clients’ and counsellors’ views on the multiple affordances of chatbots in alcohol and other drug counselling. International Journal of Drug Policy, 94, 102910. https://doi.org/10.1016/j.drugpo.2020.102910
Bérubé, C., Schachner, T., Keller, R., Fleisch, E., v Wangenheim, F., Barata, F., & Kowatsch, T. (2021). Voice-based conversational agents for the prevention and management of chronic and mental health conditions: systematic literature review. Journal of Medical Internet Research, 23(3), e25933. https://doi.org/10.2196/25933
Beilharz, F., Sukunesan, S., Rossell, S. L., Kulkarni, J., & Sharp, G. (2021). Development of a positive body image chatbot (KIT) with young people and parents/carers: qualitative focus group study. Journal of Medical Internet Research, 23(6), e27807. https://doi.org/10.2196/27807
Boczar, D., Sisti, A., Oliver, J. D., Helmi, H., Restrepo, D. J., Huayllani, M. T., Spaulding, A. C., Carter, R., Rinker, B.D., & Forte, A. J. (2020). Artificial intelligent virtual assistant for plastic surgery patient's frequently asked questions: a pilot study. Annals of Plastic Surgery, 84(4), e16-e21. https://doi.org/10.1097/SAP.0000000000002252
Chen, J., Lyell, D., Laranjo, L., & Magrabi, F. (2020). Effect of speech recognition on problem solving and recall in consumer digital health tasks: controlled laboratory experiment. Journal of Medical Internet Research, 22(6), e14827. https://doi.org/10.2196/14827
Comendador, B. E. V., Francisco, B. M. B., Medenilla, J. S., & Mae, S. (2015). Pharmabot: a pediatric generic medicine consultant chatbot. Journal of Automation and Control Engineering, 3(2), 137-140. https://doi.org/10.12720/joace.3.2.137-140
Damij, N., & Bhattacharya, S. (2022). The role of AI chatbots in mental health related public services in a (post) pandemic world: a review and future research agenda. Proceedings of the 2022 IEEE Technology and Engineering Management Conference (TEMSCON EUROPE) (pp. 152-159). https://doi.org/10.1109/TEMSCONEUROPE54743.2022.9801962
Danda, P., Srivastava, B. M. L., & Shrivastava, M. (2016). Vaidya: a spoken dialog system for health domain. Proceedings of the 13th International Conference on Natural Language Processing (pp. 161-166). NLP Association of India. https://aclanthology.org/W16-6321 (archived at the Internet Archive (https://web.archive.org/web/20210815201134/https://aclanthology.org/W16-6321.pdf))
Denecke, K., Hochreutener, S. L., Pöpel, A., & May, R. (2018). Self-anamnesis with a conversational user interface: concept and usability study. Methods of Information in Medicine, 57(05/06), 243-252. https://doi.org/10.1055/s-0038-1675822
Elmasri, D., & Maeder, A. (2016). A conversational agent for an online mental health intervention. Proceedings of the International Conference on Brain Informatics (pp.243-251). Springer. https://doi.org/10.1007/978-3-319-47103-7_24
Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Mental Health, 4(2), e7785. https://doi.org/10.2196/mental.7785
Gaffney, H., Mansell, W., & Tai, S. (2020). Agents of change: understanding the therapeutic processes associated with the helpfulness of therapy for mental health problems with relational agent MYLO. Digital Health, 6, 2055207620911580. https://doi.org/10.1177/2055207620911580
Griol, D., & Callejas, Z. (2016). Mobile conversational agents for context-aware care applications. Cognitive Computation, 8(2), 336-356. https://doi.org/10.1007/s12559-015-9352-x
Goh, A. S. Y., Wong, L. L., & Yap, K. Y.-L. (2021). Evaluation of COVID-19 information provided by digital voice assistants. International Journal of Digital Health, 1(1), 1-11. https://doi.org/10.29337/ijdh.25
Goldenthal, S. B., Portney, D., Steppe, E., Ghani, K., & Ellimoottil, C. (2019). Assessing the feasibility of a chatbot after ureteroscopy. Mhealth, 5(8), 1-5. https://doi.org/10.21037/mhealth.2019.03.01
Haque, M. R., & Rubya, S. (2023). An overview of chatbot-based mobile mental health apps: insights from app description and user reviews. JMIR mHealth and uHealth, 11(1), e44838. https://doi.org/10.2196/44838
Hill, R. (1998). What sample size is “enough” in internet survey research? Interpersonal Computing and Technology: An Electronic Journal for the 21st Century, 6(3-4), 1-12.
Huang, D.-H., & Chueh, H.-E. (2021). Chatbot usage intention analysis: veterinary consultation. Journal of Innovation & Knowledge, 6(3), 135-144. https://doi.org/10.1016/j.jik.2020.09.002
Hu, M. (2022). 健康中国战略下的媒体行动和传播创新 (Jiànkāng zhōngguó zhànlüè xià de méitǐ xíngdòng hé chuánbò chuàngxīn = Media actions and communication innovations under the Healthy China Strategy). 传媒 (Chuánméi = Media), 22, 66-68. http://qikan.cqvip.com/Qikan/Article/Detail?id=7108427618
Jadhav, K. P., & Thorat, S. A. (2020). Towards designing conversational agent systems. Computing in Engineering and Technology, 1025, 533-542. https://doi.org/10.1007/978-981-32-9515-5_51
Kang, J., & Wei, L. (2018). "Give me the support I want!" The effect of matching an embodied conversational agent's social support to users' social support needs in fostering positive user-agent interaction. Proceedings of the 6th International Conference on Human-Agent Interaction (pp. 106-113). The Association for Computing Machinery. https://doi.org/10.1145/3284432.3284462
Kasirzadeh, A., & Gabriel, I. (2023). In conversation with Artificial Intelligence: aligning language models with human values. Philosophy & Technology, 36(2), 1-24. https://doi.org/10.1007/s13347-023-00606-x
Khan, A. & Qayyum, A. (2019). Investigating the elements of supervision in library and information science practicums: a systematic literature review. In Proceedings of RAILS - Research Applications, Information and Library Studies, 2018, Faculty of Information Technology, Monash University, Australia, 28-30 November 2018. Information Research, 24(3), paper rails1813. https://informationr.net/ir/24-3/rails/rails1813.html
Kocaballi, A. B., Quiroz, J. C., Rezazadegan, D., Berkovsky, S., Magrabi, F., Coiera, E., & Laranjo, L. (2020). Responses of conversational agents to health and lifestyle prompts: investigation of appropriateness and presentation structures. Journal of Medical Internet Research, 22(2), e15823. https://doi.org/10.2196/15823
Lakens, D. (2022). Sample size justification. Collabra: Psychology, 8(1), 33267. https://doi.org/10.1525/collabra.33267
Laranjo, L., Dunn, A. G., Tong, H. L., Kocaballi, A. B., Chen, J., Bashir, R., Surian, D., Gallego, B., Magrabi, F., Lau, A. Y. S. & Coiera, E. (2018). Conversational agents in healthcare: a systematic review. Journal of the American Medical Informatics Association, 25(9), 1248-1258. https://doi.org/10.1093/jamia/ocy072
Lee, Y. W., Strong, D. M., Kahn, B. K., & Wang, R. Y. (2002). AIMQ: a methodology for information quality assessment. Information & Management, 40(2), 133-146. https://doi.org/10.1016/S0378-7206(02)00043-5
Leedy, P. D. & Ormrod, J. E. (2019). Practical research: planning and design. Pearson Education Limited.
Ling, E. C., Tussyadiah, I., Tuomi, A., Stienmetz, J., & Ioannou, A. (2021). Factors influencing users’ adoption and use of conversational agents: a systematic review. Psychology & Marketing, 38(7), 1031-1051. https://doi.org/10.1002/mar.21491
Liu, C., Zhang, B., & Peng, G. (2021). A systematic review of information quality of artificial intelligence based conversational agents in healthcare. Proceedings of the 23rd International Conference on Human-Computer Interaction, 12782:331-347. Springer International Publishing. https://doi.org/10.1007/978-3-030-77015-0_24
Liu, C., Zowghi, D., Peng, G., & Kong, S. (2023). Information quality of conversational agents in healthcare. Information Development, 02666669231172434. https://doi.org/10.1177/02666669231172434
Ly, K. H., Ly, A.-M., & Andersson, G. (2017). A fully automated conversational agent for promoting mental well-being: a pilot RCT using mixed methods. Internet Interventions, 10, 39-46. https://doi.org/10.1016/j.invent.2017.10.002
Mariani, M. M., Hashemi, N., & Wirtz, J. (2023). Artificial intelligence empowered conversational agents: a systematic literature review and research agenda. Journal of Business Research, 161, 113838. https://doi.org/10.1016/j.jbusres.2023.113838
Mingotto, E., Montaguti, F., & Tamma, M. (2021). Challenges in re-designing operations and jobs to embody AI and robotics in services. Findings from a case in the hospitality industry. Electronic Markets, 31, 493-510. https://doi.org/10.1007/s12525-020-00439-y
Mujeeb, S., Hafeez, M., & Arshad, T. (2017). Aquabot: a diagnostic chatbot for achluophobia and autism. International Journal of Advanced Computer Science and Applications, 8(9), 209-216. https://doi.org/10.14569/IJACSA.2017.080930
Nadarzynski, T., Miles, O., Cowie, A., & Ridge, D. (2019). Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: a mixed-methods study. Digital Health, 5, 1-12. https://doi.org/10.1177/2055207619871808
Palanica, A., Flaschner, P., Thommandram, A., Li, M., & Fossat, Y. (2019). Physicians’ perceptions of chatbots in health care: cross-sectional web-based survey. Journal of Medical Internet Research, 21(4), e12887. https://doi.org/10.2196/12887
Riefle, L., Brand, A., Mietz, J., Rombach, L., Szekat, C., & Benz, C. (2022). What fits Tim might not fit Tom: exploring the impact of user characteristics on users’ experience with conversational interaction modalities. Proceedings of 17th International Conference on Wirtschaftsinformatik, 13:1-10. Association for Information Systems. https://aisel.aisnet.org/wi2022/hci/hci/13
Rose-Davis, B., Van Woensel, W., Stringer, E., Abidi, S., & Abidi, S. S. R. (2019). Using an artificial intelligence-based argument theory to generate automated patient education dialogues for families of children with juvenile idiopathic arthritis. Proceedings of the 17th World Congress on Medical and Health Informatics (pp. 1337-1341). IOS Press. https://doi.org/10.3233/SHTI190444
Setia, P., Setia, P., Venkatesh, V., & Joglekar, S. (2013). Leveraging digital technologies: how information quality leads to localized capabilities and customer service performance. MIS Quarterly, 37(2), 565-590. https://doi.org/10.25300/MISQ/2013/37.2.11
Stieglitz, S., Hofeditz, L., Brünker, F., Ehnis, C., Mirbabaie, M., & Ross, B. (2022). Design principles for conversational agents to support Emergency Management Agencies. International Journal of Information Management, 63, 102469. https://doi.org/10.1016/j.ijinfomgt.2021.102469
Sun, Y., Zhang, Y., Gwizdka, J., & Trace, C. B. (2019). Consumer evaluation of the quality of online health information: systematic literature review of relevant criteria and indicators. Journal of Medical Internet Research, 21(5), e12522. https://doi.org/10.2196/12522
Thompson, D., Callender, C., Gonynor, C., Cullen, K. W., Redondo, M. J., Butler, A., & Anderson, B. J. (2019). Using relational agents to promote family communication around type 1 diabetes self-management in the diabetes family teamwork online intervention: longitudinal pilot study. Journal of Medical Internet Research, 21(9), e15318. https://doi.org/10.2196/15318
Tudor Car, L., Dhinagaran, D. A., Kyaw, B. M., Kowatsch, T., Joty, S., Theng, Y. L., & Atun, R. (2020). Conversational agents in health care: scoping review and conceptual analysis. Journal of Medical Internet Research, 22(8), e17158. https://doi.org/10.2196/17158
van Heerden, A., Ntinga, X., & Vilakazi, K. (2017). The potential of conversational agents to provide a rapid HIV counseling and testing services. Proceedings of International Conference on the Frontiers and Advances in Data Science (pp. 80-85). IEEE. https://doi.org/10.1109/FADS.2017.8253198
Van Pinxteren, M. M., Pluymaekers, M., & Lemmink, J. G. (2020). Human-like communication in conversational agents: a literature review and research agenda. Journal of Service Management, 31(2), 203-225. https://doi.org/10.1108/JOSM-06-2019-0175
Wang, L., Wang, D., Tian, F., Peng, Z., Fan, X., Zhang, Z., Yu, M., Ma, X., & Wang, H. (2021). Cass: towards building a social-support chatbot for online health community. Proceedings of the ACM on Human-Computer Interaction (pp. 1-31). ACM Press. https://doi.org/10.1145/3449083
Weisberg, H. F., Krosnick, J. A., & Bowen, B. D. (1989). An introduction to survey research and data analysis. Scott, Foresman & Co.
Yang, S., Lee, J., Sezgin, E., Bridge, J., & Lin, S. (2021). Clinical advice by voice assistants on postpartum depression: cross-sectional investigation using Apple Siri, Amazon Alexa, Google Assistant, and Microsoft Cortana. JMIR mHealth and uHealth, 9(1), e24045. https://doi.org/10.2196/24045
Study ID | Title of the study |
---|---|
S1 | Nadarzynski, T., Miles, O., Cowie, A., & Ridge, D. (2019). Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: a mixed-methods study. Digital Health, 5, 1-12. https://doi.org/10.1177/2055207619871808 |
S2 | Chaix, B., Bibault, J.-E., Pienkowski, A., Delamon, G., Guillemassé, A., Nectoux, P., & Brouard, B. (2019). When chatbots meet patients: one-year prospective study of conversations between patients with breast cancer and a chatbot. JMIR Cancer, 5(1), e12856. https://doi.org/10.2196/12856 |
S3 | Denecke, K., Hochreutener, S. L., Pöpel, A., & May, R. (2018). Self-anamnesis with a conversational user interface: concept and usability study. Methods of Information in Medicine, 57(05/06), 243-252. https://doi.org/10.1055/s-0038-1675822 |
S4 | Bian, Y., Xiang, Y., Tong, B., Feng, B., & Weng, X. (2020). Artificial intelligence–assisted system in postoperative follow-up of orthopedic patients: exploratory quantitative and qualitative study. Journal of Medical Internet Research, 22(5), e16896. https://doi.org/10.2196/16896 |
S5 | Chen, J., Lyell, D., Laranjo, L., & Magrabi, F. (2020). Effect of speech recognition on problem solving and recall in consumer digital health tasks: controlled laboratory experiment. Journal of Medical Internet Research, 22(6), e14827. https://doi.org/10.2196/14827 |
S6 | Barnett, A., Savic, M., Pienaar, K., Carter, A., Warren, N., Sandral, E., Manning, V., & Lubman, D. I. (2021). Enacting ‘more-than-human’ care: clients’ and counsellors’ views on the multiple affordances of chatbots in alcohol and other drug counselling. International Journal of Drug Policy, 94, 102910. https://doi.org/10.1016/j.drugpo.2020.102910 |
S7 | Rose-Davis, B., Van Woensel, W., Stringer, E., Abidi, S., & Abidi, S. S. R. (2019). Using an artificial intelligence-based argument theory to generate automated patient education dialogues for families of children with juvenile idiopathic arthritis. Proceedings of the 17th World Congress on Medical and Health Informatics (pp. 1337-1341). IOS Press. https://doi.org/10.3233/SHTI190444 |
S8 | Boczar, D., Sisti, A., Oliver, J. D., Helmi, H., Restrepo, D. J., Huayllani, M. T., Spaulding, A. C., Carter, R., Rinker, B.D., & Forte, A. J. (2020). Artificial intelligent virtual assistant for plastic surgery patient's frequently asked questions: a pilot study. Annals of Plastic Surgery, 84(4), e16-e21. https://doi.org/10.1097/SAP.0000000000002252 |
S9 | Gaffney, H., Mansell, W., & Tai, S. (2020). Agents of change: understanding the therapeutic processes associated with the helpfulness of therapy for mental health problems with relational agent MYLO. Digital Health, 6, 2055207620911580. https://doi.org/10.1177/2055207620911580 |
S10 | Palanica, A., Flaschner, P., Thommandram, A., Li, M., & Fossat, Y. (2019). Physicians’ perceptions of chatbots in health care: cross-sectional web-based survey. Journal of Medical Internet Research, 21(4), e12887. https://doi.org/10.2196/12887 |
S11 | Crutzen, R., Peters, G.-J. Y., Portugal, S. D., Fisser, E. M., & Grolleman, J. J. (2011). An artificially intelligent chat agent that answers adolescents' questions related to sex, drugs, and alcohol: an exploratory study. Journal of Adolescent Health, 48(5), 514-519. https://doi.org/10.1016/j.jadohealth.2010.09.002 |
S12 | Tanana, M. J., Soma, C. S., Srikumar, V., Atkins, D. C., & Imel, Z. E. (2019). Development and evaluation of ClientBot: patient-like conversational agent to train basic counseling skills. Journal of Medical Internet Research, 21(7), e12529. https://doi.org/10.2196/12529 |
S13 | Hussain, S., & Athula, G. (2018). Extending a conventional chatbot knowledge base to external knowledge source and introducing user based sessions for diabetes education. Proceedings of the 32nd International Conference on Advanced Information Networking and Applications Workshops (pp. 698-703). IEEE. https://doi.org/10.4018/978-1-5225-6023-4.ch015 |
S14 | Ralston, K., Chen, Y., Isah, H., & Zulkernine, F. (2019). A voice interactive multilingual student support system using IBM Watson. Proceedings of 18th International Conference on Machine Learning and Applications (pp. 1924-1929). IEEE. https://doi.org/10.1109/ICMLA.2019.00309 |
S15 | Chatzimina, M., Koumakis, L., Marias, K., & Tsiknakis, M. (2019). Employing conversational agents in palliative care: a feasibility study and preliminary assessment. Proceedings of the 19th International Conference on Bioinformatics and Bioengineering (pp. 489-496). IEEE. https://doi.org/10.1109/BIBE.2019.00095 |
S16 | van Heerden, A., Ntinga, X., & Vilakazi, K. (2017). The potential of conversational agents to provide a rapid HIV counseling and testing services. Proceedings of International Conference on the Frontiers and Advances in Data Science (pp. 80-85). IEEE. https://doi.org/10.1109/FADS.2017.8253198 |
S17 | Balsa, J., Félix, I., Cláudio, A. P., Carmo, M. B., e Silva, I. C., Guerreiro, A., Guedes, M., Henriques, A., & Guerreiro, M. P. (2020). Usability of an intelligent virtual assistant for promoting behavior change and self-care in older people with type 2 diabetes. Journal of Medical Systems, 44(7), 1-12. https://doi.org/10.1007/s10916-020-01583-w |
S18 | Ashish, V.P., & Saini, D. (2020). Would you trust a bot for healthcare advice? An empirical investigation. Proceedings of the 24th Pacific Asia Conference on Information Systems (p. 62). Association for Information Systems. https://aisel.aisnet.org/pacis2020/62 |
S19 | Giorgino, T., Quaglini, S., & Stefanelli, M. (2004). Evaluation and usage patterns in the homey hypertension management dialog system. Proceedings of AAAI Fall Symposium on Dialogue Systems for Health Communication (pp. 1-4). AAAI Press. https://aaai.org/papers/0006-FS04-04-006-evaluation-and-usage-patterns-in-the-homey-hypertension-management-dialog-system/ |
S20 | Ireland, D., Bradford, D., Szepe, E., Lynch, E., Martyn, M., Hansen, D., & Gaff, C. (2021). Introducing Edna: a trainee chatbot designed to support communication about additional (secondary) genomic findings. Patient Education and Counseling, 104(4), 739-749. https://doi.org/10.1016/j.pec.2020.11.007 |
S21 | Beilharz, F., Sukunesan, S., Rossell, S. L., Kulkarni, J., & Sharp, G. (2021). Development of a positive body image chatbot (KIT) with young people and parents/carers: qualitative focus group study. Journal of Medical Internet Research, 23(6), e27807. https://doi.org/10.2196/27807 |
S22 | Yang, S., Lee, J., Sezgin, E., Bridge, J., & Lin, S. (2021). Clinical advice by voice assistants on postpartum depression: cross-sectional investigation using Apple Siri, Amazon Alexa, Google Assistant, and Microsoft Cortana. JMIR mHealth and uHealth, 9(1), e24045. https://doi.org/10.2196/24045 |
S23 | Chen, J., Chen, C., Walther, J. B., & Sundar, S. S. (2021). Do you feel special when an AI doctor remembers you? Individuation effects of AI vs. human doctors on user experience. Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1-7). ACM Press. |
S24 | Wang, L., Wang, D., Tian, F., Peng, Z., Fan, X., Zhang, Z., Yu, M., Ma, X., & Wang, H. (2021). Cass: towards building a social-support chatbot for online health community. Proceedings of the ACM on Human-Computer Interaction (pp. 1-31). ACM Press. https://doi.org/10.1145/3449083 |
S25 | Huang, J., Li, Q., Xue, Y., Cheng, T., Xu, S., Jia, J., & Feng, L. (2015). Teenchat: a chatterbot system for sensing and releasing adolescents’ stress. Proceedings of the International Conference on Health Information Science (pp. 133-145). Springer. https://doi.org/10.1007/978-3-319-19156-0_14 |
S26 | Mujeeb, S., Hafeez, M., & Arshad, T. (2017). Aquabot: a diagnostic chatbot for achluophobia and autism. International Journal of Advanced Computer Science and Applications, 8(9), 209-216. https://doi.org/10.14569/IJACSA.2017.080930 |
S27 | Danda, P., Srivastava, B. M. L., & Shrivastava, M. (2016). Vaidya: a spoken dialog system for health domain. Proceedings of the 13th International Conference on Natural Language Processing (pp. 161-166). NLP Association of India. https://aclanthology.org/W16-6321 |
S28 | Galescu, L., Allen, J., Ferguson, G., Quinn, J., & Swift, M. (2009). Speech recognition in a dialog system for patient health monitoring. Proceedings of the International Conference on Bioinformatics and Biomedicine Workshop (pp. 302-307). IEEE. https://doi.org/10.1109/BIBMW.2009.5332111 |
S29 | Comendador, B. E. V., Francisco, B. M. B., Medenilla, J. S., & Mae, S. (2015). Pharmabot: a pediatric generic medicine consultant chatbot. Journal of Automation and Control Engineering, 3(2), 137-140. https://doi.org/10.12720/joace.3.2.137-140 |
S30 | Elmasri, D., & Maeder, A. (2016). A conversational agent for an online mental health intervention. Proceedings of the International Conference on Brain Informatics (pp. 243-251). Springer. |
S31 | Kadariya, D., Venkataramanan, R., Yip, H. Y., Kalra, M., Thirunarayanan, K., & Sheth, A. (2019). KBot: knowledge-enabled personalized chatbot for asthma self-management. Proceedings of the International Conference on Smart Computing (pp. 138-143). IEEE. https://doi.org/10.1109/smartcomp.2019.00043 |
S32 | Thompson, D., Callender, C., Gonynor, C., Cullen, K. W., Redondo, M. J., Butler, A., & Anderson, B. J. (2019). Using relational agents to promote family communication around type 1 diabetes self-management in the diabetes family teamwork online intervention: longitudinal pilot study. Journal of Medical Internet Research, 21(9), e15318. https://doi.org/10.2196/15318 |
S33 | Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Mental Health, 4(2), e7785. https://doi.org/10.2196/mental.7785 |
S34 | Abdullah, A. S., Gaehde, S., & Bickmore, T. (2018). A tablet based embodied conversational agent to promote smoking cessation among veterans: a feasibility study. Journal of Epidemiology and Global Health, 8(3-4), 225-230. https://doi.org/10.2991/j.jegh.2018.08.104 |
S35 | Griol, D., & Callejas, Z. (2016). Mobile conversational agents for context-aware care applications. Cognitive Computation, 8(2), 336-356. https://doi.org/10.1007/s12559-015-9352-x |
S36 | Milne, M., Luerssen, M. H., Lewis, T. W., Leibbrandt, R. E., & Powers, D. M. (2010). Development of a virtual agent based social tutor for children with autism spectrum disorders. Proceedings of the International Joint Conference on Neural Networks (pp. 1-9). IEEE. https://doi.org/10.1109/IJCNN.2010.5596584 |
S37 | Kang, J., & Wei, L. (2018). "Give me the support I want!" The effect of matching an embodied conversational agent's social support to users' social support needs in fostering positive user-agent interaction. Proceedings of the 6th International Conference on Human-Agent Interaction (pp. 106-113). ACM Press. https://doi.org/10.1145/3284432.3284462 |
S38 | Ly, K. H., Ly, A.-M., & Andersson, G. (2017). A fully automated conversational agent for promoting mental well-being: a pilot RCT using mixed methods. Internet Interventions, 10, 39-46. https://doi.org/10.1016/j.invent.2017.10.002 |
S39 | Miner, A. S., Milstein, A., Schueller, S., Hegde, R., Mangurian, C., & Linos, E. (2016). Smartphone-based conversational agents and responses to questions about mental health, interpersonal violence, and physical health. JAMA Internal Medicine, 176(5), 619-625. https://doi.org/10.1001/jamainternmed.2016.0400 |
S40 | Goldenthal, S. B., Portney, D., Steppe, E., Ghani, K., & Ellimoottil, C. (2019). Assessing the feasibility of a chatbot after ureteroscopy. mHealth, 5(8), 1-5. https://doi.org/10.21037/mhealth.2019.03.01 |
S41 | Kocaballi, A. B., Quiroz, J. C., Rezazadegan, D., Berkovsky, S., Magrabi, F., Coiera, E., & Laranjo, L. (2020). Responses of conversational agents to health and lifestyle prompts: investigation of appropriateness and presentation structures. Journal of Medical Internet Research, 22(2), e15823. |
S42 | Bickmore, T. W., Pfeifer, L. M., & Jack, B. W. (2009). Taking the time to care: empowering low health literacy hospital patients with virtual nurse agents. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1265-1274). ACM Press. |
S43 | Griffin, A. C., Xing, Z., Mikles, S. P., Bailey, S., Khairat, S., Arguello, J., Wang, Y., & Chung, A. E. (2021). Information needs and perceptions of chatbots for hypertension medication self-management: a mixed methods study. JAMIA Open, 4(2), ooab021. https://doi.org/10.1093/jamiaopen/ooab021 |
S44 | Huang, D.-H., & Chueh, H.-E. (2021). Chatbot usage intention analysis: veterinary consultation. Journal of Innovation & Knowledge, 6(3), 135-144. https://doi.org/10.1016/j.jik.2020.09.002 |
S45 | Goh, A. S. Y., Wong, L. L., & Yap, K. Y.-L. (2021). Evaluation of COVID-19 information provided by digital voice assistants. International Journal of Digital Health, 1(1), 1-11. https://doi.org/10.29337/ijdh.25 |
Note: the 45 included studies in this table are numbered in the order in which they appeared during the search process of the systematic review. |
Appendix 2 is available at the following link: https://archive.org/details/appendix-2_202311.