
There is growing concern about the mismatch between AI systems, which expect near-perfect input, and human traits such as error and indecision. Unlike human decision-making, AI often fails to account for these traits. In response, a team of University of Cambridge researchers is working to integrate human uncertainty into AI systems, acknowledging that artificial intelligence needs to accommodate human imperfection.

Researchers aim to align machine learning with human behavior to improve AI trustworthiness

Many AI systems rely on human feedback yet assume that humans are infallible. Real-life decisions, however, are prone to error and uncertainty. The researchers seek to align machine learning with actual human behavior to make AI-human interfaces more trustworthy, particularly in critical domains such as medical diagnostics, where inaccuracy and doubt carry real risks.

The researchers adapted an image classification dataset so that human participants could indicate how uncertain they were when labeling each image (a simplified training sketch appears below). Systems trained with these uncertain labels became better at handling doubtful feedback, consistent with the idea that "human-in-the-loop" machine learning mitigates risk. At times, however, including human doubt degraded the performance of the combined human-AI system, prompting further investigation.
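As a rough illustration of this kind of training, the sketch below fits a small classifier on "soft" labels, where each label is a probability distribution encoding the annotator's confidence rather than a single hard class. The model, synthetic data, and confidence encoding are assumptions made for this example only; the article does not describe the researchers' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch: train a tiny classifier on "soft" labels that encode
# annotator uncertainty, instead of single hard class indices.
# Dataset, model, and label encoding are assumptions for this example.

torch.manual_seed(0)
num_classes = 3

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, num_classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic batch: 16 examples with 8 features each.
x = torch.randn(16, 8)

# Build soft labels: a confident annotator yields a near-one-hot distribution,
# an uncertain annotator spreads probability mass toward a uniform distribution.
hard = torch.randint(0, num_classes, (16,))
confidence = torch.rand(16, 1) * 0.5 + 0.5          # assumed confidence in [0.5, 1.0]
one_hot = torch.zeros(16, num_classes).scatter_(1, hard.unsqueeze(1), 1.0)
soft_labels = confidence * one_hot + (1 - confidence) / num_classes

for step in range(100):
    optimizer.zero_grad()
    logits = model(x)
    # Cross-entropy against probability targets (soft labels), so the loss
    # reflects how sure the human annotator was about each example.
    loss = F.cross_entropy(logits, soft_labels)
    loss.backward()
    optimizer.step()
```

In this toy setup, examples labeled with low confidence pull the model toward less extreme predictions, which is one simple way a system can be made to "listen" to human doubt during training.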

Katherine Collins, the study's lead author from Cambridge's Department of Engineering, emphasizes how central uncertainty is to human reasoning and notes the discrepancy between how humans and AI handle it. While much effort has gone into addressing a model's own uncertainty, far less attention has been paid to integrating the human perspective, and human uncertainty, into AI models.

In low-stakes situations, such as mistaking a stranger for a friend, human uncertainty matters little. In safety-critical contexts such as medical diagnostics, however, it can pose serious risks.

AI systems assume human infallibility in decisions

Many human-AI systems wrongly assume that humans are infallible decision-makers, when in reality people make mistakes. This study examines what happens when that uncertainty is acknowledged, particularly in critical domains such as healthcare. The co-author stresses the need for better tools to fine-tune models so that people can openly express their uncertainty. Machines can be trained to answer with complete confidence, but humans often cannot offer the same certainty, and that gap poses a challenge for machine learning models.