A new tool known as Nightshade could help artists protect their copyrighted work from being used for AI model training by “poisoning” the data that AI systems scrape without creator consent, the MIT Technology Review reported earlier this month.
Nightshade comes from the laboratory of Ben Zhao, a professor at the University of Chicago. The digital tool would let artists modify the pixels of their pieces so that the art looks unchanged to human viewers but confuses AI systems trained on it. The MIT Technology Review shared sample outputs from “poisoned” image generation models, which produced warped pictures of everyday objects. Such errors would likely frustrate future end users of a model, especially those who have paid for enterprise versions of it.
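The report does not describe Nightshade’s algorithm in detail, but the basic idea of an imperceptible pixel change can be sketched in a few lines of Python. The snippet below is a hypothetical illustration only: it adds random noise capped at a few intensity levels per colour channel, whereas Nightshade’s actual perturbations are carefully optimized to mislead specific models rather than random.

```python
import numpy as np
from PIL import Image

def add_small_perturbation(src: str, dst: str, epsilon: int = 4) -> None:
    """Illustrative sketch, not Nightshade's method: nudge each pixel
    by at most +/- epsilon (out of 255), a change far too small for
    human viewers to notice."""
    img = np.asarray(Image.open(src).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape)
    Image.fromarray(np.clip(img + noise, 0, 255).astype(np.uint8)).save(dst)

# Hypothetical usage:
# add_small_perturbation("artwork.png", "artwork_shaded.png")
```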
Multiple pending lawsuits against companies such as ChatGPT-maker OpenAI, Meta, Stability AI, and Google allege that these firms illegally harvested copyrighted works, such as novels and art, to build generative AI systems like chatbots and text-to-image generators, without seeking creators’ permission or paying them for the use of their work.
OpenAI has defended the use of copyrighted media for innovative purposes and scientific progress, claiming this is protected under the fair use doctrine.
Still, in the face of backlash, more tech firms are letting people opt out of having their work used for training. Nightshade would go a step further, enabling artists to actively damage the data sets of AI companies that use their copyrighted pieces without permission.
Zhao’s team is also behind a similar tool called Glaze, which protects artists by masking their personal style so that AI models cannot learn to mimic it.