Boston
Published on April 10, 2024
MIT Researchers Develop AI 'Red-Team' for Intercepting Chatbot Misbehavior
Source: Unsplash / Emiliano Vittoriosi

In the arms race to ensure AI chatbots don't go rogue, researchers at MIT are deploying a new tool. According to a report by MIT News, a team from the Improbable AI Lab and the MIT-IBM Watson AI Lab has developed a machine-learning model designed to better predict and prevent harmful chatbot behavior. The focus is on training a so-called 'red-team' AI that can generate a broader range of prompts that cause other AIs to output toxic responses, covering ground that human testers might miss.

Traditional red teams use humans to try to trip up an AI by feeding it potentially risky prompts. The problem is that the space of possible prompts is vast, making comprehensive coverage difficult. As a result, an AI considered secure might still churn out unwanted answers. The MIT team's research suggests machine learning could do the job faster, rooting out responses that human experts could overlook.

This machine-learning method, as MIT News details, employs curiosity-driven reinforcement learning, rewarding the red-team model for generating novel prompts that draw out toxicity. Those prompts can then be used to patch the gaps in an AI's defenses. Zhang-Wei Hong, a graduate student at the Improbable AI Lab and the paper's lead author, said, "Right now, every large language model has to undergo a very lengthy period of red-teaming to ensure its safety. That is not going to be sustainable if we want to update these models in rapidly changing environments."
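To make the idea concrete, here is a minimal Python sketch of what a curiosity-style reward for a red-team prompt generator could look like: the generator is scored both on whether it elicits toxicity from the target and on how different its prompt is from prompts it has already tried. All names here (toxicity_score, novelty_bonus, red_team_reward) are illustrative placeholders, not the MIT team's actual implementation.

```python
# Hypothetical sketch of a curiosity-driven red-team reward.
# The real system uses a learned toxicity classifier and a trained
# prompt generator; this demo stands in for both with simple code.

from difflib import SequenceMatcher


def toxicity_score(response: str) -> float:
    """Placeholder for a learned toxicity classifier (returns 0.0-1.0)."""
    banned = {"insult", "threat"}  # stand-in keyword check for the demo
    return 1.0 if any(word in response.lower() for word in banned) else 0.0


def novelty_bonus(prompt: str, history: list[str]) -> float:
    """Reward prompts that look unlike anything tried before."""
    if not history:
        return 1.0
    max_similarity = max(
        SequenceMatcher(None, prompt, past).ratio() for past in history
    )
    return 1.0 - max_similarity  # higher when the prompt is more novel


def red_team_reward(prompt: str, response: str, history: list[str],
                    novelty_weight: float = 0.5) -> float:
    """Combine toxicity elicited from the target with a curiosity bonus."""
    return toxicity_score(response) + novelty_weight * novelty_bonus(prompt, history)
```

The novelty term is the key ingredient: without it, a reward-maximizing generator tends to collapse onto a handful of prompts it already knows will work, which is exactly the narrow coverage problem the researchers set out to fix.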

The team tested their new system against an AI chatbot that had been fine-tuned to avoid toxic replies, and it proved surprisingly effective. Hong told MIT News that their method "provides a faster and more effective way to do this quality assurance." The curiosity-driven model rapidly produced 196 prompts that elicited toxic responses from a chatbot deemed "safe."
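A test of this kind can be framed as a simple harness that queries the target chatbot with generated prompts and keeps every prompt whose response a classifier flags. The sketch below is an assumed illustration of that loop, not the paper's actual test setup; the generator, chatbot, and classifier are passed in as generic callables.

```python
# Hypothetical evaluation harness: count how many generated prompts
# elicit responses that a toxicity classifier flags on a "safe" chatbot.

from typing import Callable, List


def collect_toxic_prompts(generate_prompt: Callable[[], str],
                          target_chatbot: Callable[[str], str],
                          is_toxic: Callable[[str], bool],
                          num_trials: int = 1000) -> List[str]:
    """Return every generated prompt whose response the classifier flags."""
    toxic_prompts: List[str] = []
    for _ in range(num_trials):
        prompt = generate_prompt()
        response = target_chatbot(prompt)
        if is_toxic(response):
            toxic_prompts.append(prompt)
    return toxic_prompts
```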

The broader implications of this research point to streamlined verification of AI models, critical for keeping up with the accelerating pace at which they're being developed and updated. The researchers believe their strategy could vastly diminish the human effort needed to vet AI models, making them safer and more dependable before they're released into the real world. Pulkit Agrawal, assistant professor in MIT's Computer Science and Artificial Intelligence Laboratory and senior author of the paper, highlighted the stakes: "These models are going to be an integral part of our lives and it’s important that they are verified before released for public consumption."

As the world leans increasingly on artificial intelligence for everyday tasks, the work by the MIT team represents a significant advancement in preemptive digital hygiene. Keeping AI on the straight and narrow could very well depend on smarter machines like the ones they're proposing, programmed to catch their own kind's potential slip-ups before they happen.
