
University of Colorado School of Medicine researchers are investigating whether chatbots used in healthcare contribute to bias. These chatbots, which simulate conversation and are being adopted globally, are the focus of a study examining patients’ experiences with artificial intelligence (AI) programs.

Should chatbots look like actual physicians?

Ethical questions surround the appearance of chatbot avatars, which currently range from human-like representations to cartoon characters and logos. The researchers note that chatbots could even be designed to resemble a patient’s actual physician in appearance and voice. This design decision raises concerns about potential biases and the ethical implications of nudging.

The paper “More than just a pretty face? Nudging and bias in chatbots” calls for a thorough examination of chatbot technology from a health equity perspective to determine its impact on patient outcomes.

Internal medicine professor Annie Moore emphasizes the importance of understanding how patients perceive chatbots, and the potential impact on trust and compassion when a chatbot serves as a patient’s first encounter with the healthcare system.

The researchers noted a significant rise in the prevalence of chatbots during the COVID-19 pandemic. Numerous health systems developed chatbots to serve as symptom checkers. Users could input their symptoms, such as cough and fever, and receive guidance on appropriate actions. This prompted the researchers to delve into the ethical considerations surrounding the wider application of this technology.

Chatbot avatars could manipulate patients to share personal information

The researchers found that people’s perception of a chatbot’s race or ethnicity can affect their interaction with it. If individuals perceive the chatbot to be of the same race as themselves, they may be more willing to share personal information. This raised ethical concerns for the researchers, particularly regarding the design of healthcare chatbots and the potential for unintentional manipulation of patients. While evidence suggests that people may disclose more information to chatbots than to humans, the question remains whether it is ethically acceptable to tailor chatbot avatars to enhance their effectiveness, particularly when influencing individuals’ health decisions.