Glaze and Nightshade

January 29, 2024

As generative AI models continue to grow and improve, the need for new content to train them on increases. This insatiable appetite for data clashes with the creative spirit of individual artists, though individual artists aren’t the only ones affected, as the New York Times v. OpenAI case shows. And because training effectively digests images into model weights, traditional techniques for protecting them, such as watermarks, are rendered ineffective.

Glaze and Nightshade, two new tools built by researchers at the University of Chicago, aim to restore some of the balance that has been lost between content creators and model creators.

Glaze is a defensive tool for images. By making imperceptible tweaks to an artwork, it effectively camouflages the piece from AI’s prying algorithms: a model that scrapes the glazed image perceives a different artistic style than the one a human viewer sees[1]. It’s like giving artworks a cloaking device, shielding the unique styles of artists from being co-opted into the vast, impersonal data troves of corporations. Glaze isn’t just protecting individual pieces of art; it’s safeguarding the essence of artistic individuality.

Then there’s Nightshade, which takes a more offensive approach. It turns images into AI kryptonite: the pixels are perturbed so that a model sees a different concept than the one the image actually depicts, and if a corporation trains its AI on enough of these ‘poisoned’ images without consent, the model’s grasp of that concept degrades into a confused mess[2]. Nightshade isn’t just a tool; it’s a statement against unchecked data harvesting.

Together, Glaze and Nightshade provide artists and content creators with the means to better control how their work is used.

[1] https://glaze.cs.uchicago.edu/what-is-glaze.html
[2] https://nightshade.cs.uchicago.edu/whatis.html