Published on December 08, 2023
MIT Engineers New AI Trust Tutorials to Boost Human-AI Synergy in Critical Tasks
Source: Massachusetts Institute of Technology Official Website

An MIT research team has crafted an automated system designed to better guide humans on when to trust an AI assistant, especially in task-critical fields like radiology. In a move to close the gap between AI reliance and human discernment, this novel onboarding training aims to outline rules for when the model's advice is reliable, as reported by MIT News.

The system automatically identifies situations in which a professional, such as a doctor or content moderator, might erroneously trust an AI's prediction. By embedding these collaborative instances into a latent space, it can pinpoint regions where the human-AI collaboration tends to go wrong and use those examples to teach users to make better trust decisions. This method led to a roughly 5 percent accuracy boost in human-AI collaborations without bogging down the decision-making process.
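To make the idea concrete, the rough sketch below (not the team's actual code) assumes each past human-AI decision has already been embedded as a vector, together with a flag marking whether relying on the AI turned out to be correct; it then clusters those embeddings and flags regions where reliance frequently misfired.

```python
import numpy as np
from sklearn.cluster import KMeans

def find_unreliable_regions(embeddings, reliance_correct, n_regions=10, error_threshold=0.5):
    """Cluster past human-AI decisions and flag clusters where trust frequently misfired."""
    kmeans = KMeans(n_clusters=n_regions, n_init=10, random_state=0)
    labels = kmeans.fit_predict(embeddings)

    flagged = []
    for region in range(n_regions):
        mask = labels == region
        if mask.sum() == 0:
            continue
        error_rate = 1.0 - reliance_correct[mask].mean()
        if error_rate > error_threshold:
            flagged.append((region, round(float(error_rate), 2)))
    return kmeans, flagged

# Toy usage: 500 past cases, each with a 64-dimensional embedding and a flag
# saying whether the human's decision to rely on (or override) the AI was correct.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 64))
reliance_correct = rng.integers(0, 2, size=500).astype(float)

_, risky_regions = find_unreliable_regions(embeddings, reliance_correct)
print(risky_regions)  # clusters where users should be taught to be more careful
```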

"So often, people are given these AI tools to use without any training to help them figure out when it is going to be helpful. That’s not what we do with nearly every other tool that people use—there is almost always some kind of tutorial that comes with it. But for AI, this seems to be missing. We are trying to tackle this problem from a methodological and behavioral perspective," Hussein Mozannar, an MIT graduate student and lead author of the paper on the training process, told MIT News.

The training pipeline uses an AI model to generate natural-language rules from the data and iteratively refines them so that the guidance becomes more accurate. Veterans in the field of AI regard the effort as essential. "People are adopting AI systems willy-nilly, and indeed, AI offers great potential, but these AI agents still sometimes make mistakes. Thus, it’s crucial for AI developers to devise methods that help humans know when it’s safe to rely on the AI’s suggestions," explained Dan Weld, a professor emeritus at the University of Washington, in praise of the MIT team's efforts.
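As a purely illustrative sketch of what iterative rule refinement might look like, the snippet below uses placeholder propose_rule and score_rule functions in place of a real language-model call and an evaluation over held-out cases; neither is an API from the MIT system.

```python
import random

# Placeholder stand-ins: a real system would prompt a language model and score
# each candidate rule against held-out human-AI decisions from a risky region.
CANDIDATE_RULES = [
    "Do not rely on the AI when the chest X-ray shows surgical hardware.",
    "Trust the AI for routine screening cases, but double-check unusual findings.",
    "Ignore the AI's suggestion when the patient history mentions prior abnormalities.",
]

def propose_rule(region_examples, previous=None):
    return random.choice(CANDIDATE_RULES)

def score_rule(rule, region_examples):
    return random.random()  # stand-in for how well the rule predicts when trust fails

def refine_rule(region_examples, n_iterations=5):
    best_rule = propose_rule(region_examples)
    best_score = score_rule(best_rule, region_examples)
    for _ in range(n_iterations):
        candidate = propose_rule(region_examples, previous=best_rule)
        score = score_rule(candidate, region_examples)
        if score > best_score:  # keep whichever phrasing scores better
            best_rule, best_score = candidate, score
    return best_rule

print(refine_rule(region_examples=[]))
```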

However, the system has limitations. A key concern is that building effective training requires a significant amount of data; without it, the procedure may not be fully beneficial. The research, funded in part by the MIT-IBM Watson AI Lab, continues to seek improvements such as leveraging unlabeled data for training and conducting larger studies to gauge the onboarding's long-term impact on user-AI interactions.
