Published on May 20, 2024
UT Austin Researchers Develop AI to Protect Creative Works from Replication Amidst Legal Challenges
Source: Unsplash / Chirag Tripathi

In a significant development for artificial intelligence (AI), researchers from The University of Texas at Austin are working to curb the widespread problem of creative works being mimicked by AI. The team, which developed a method called Ambient Diffusion, trains AI on images corrupted to the point where the originals are unrecognizable, preventing the model from directly replicating copyrighted material.

High-profile AI models such as DALL-E, Midjourney, and Stable Diffusion have become embroiled in legal disputes. Allegations suggest that these programs not only hallucinate information but also tend to 'memorize' copyrighted content, potentially infringing upon artists' works. Trained on billions of image-text pairs, data not open to public scrutiny, these models convert textual prompts into striking visuals while treading a fine line between inspiration and imitation.

The UT Austin team's study was first presented in 2023 at NeurIPS, a preeminent machine learning conference, and has made strides since then. In a development that could ripple through research and industry, a follow-up paper, "Consistent Diffusion Meets Tweedie," was accepted to the 2024 International Conference on Machine Learning. The extension of Ambient Diffusion, developed in collaboration with Constantinos Daskalakis of the Massachusetts Institute of Technology, broadened the framework to handle other kinds of noise and to scale to larger datasets.

"The framework could prove useful for scientific and medical applications, too," Adam Klivans, a computer science professor involved in the pioneering work, told The University of Texas at Austin. Their initial experiments trained a diffusion model on a curated set of 3,000 celebrity images but altered the raw data by randomly masking up to 90% of the pixels. The result was a model capable of crafting human likenesses distinct from the training images, potentially sidestepping contentious copyright disputes.

Giannis Daras, a graduate student in computer science who leads the research, emphasized the balance struck by the Ambient Diffusion technique. "Our framework allows for controlling the trade-off between memorization and performance," Daras said. "As the level of corruption encountered during training increases, the memorization of the training set decreases." This offers a promising route for AI to advance not only technologically but also in a way that safeguards intellectual property.

The promise of the Ambient Diffusion framework underscores UT Austin's commitment to addressing societal needs through advanced AI, as the university has designated 2024 its "Year of AI." The research effort includes broad collaboration among UT Austin, the University of California, Berkeley, and MIT. It is backed by funding from notable entities including the National Science Foundation, Western Digital, Amazon, and Cisco, alongside additional fellowship and endowment support for the research team.
