
INFOGRAPHIC: Poisoning Data to Protect Artists

Written by Mark DiMattei | Mar 21, 2024

 

Sign up for The Key Point of View, our weekly newsletter of blogs and podcasts!

 

We, as a society, don’t generally give artists their due. There’s plenty of anecdotal evidence of artists being told they should charge less for their work, if they’re not being asked to give it away for free in the first place. Others have their work stolen by would-be “artists” or larger companies that plagiarize their designs for profit. Social media once gave artists unique digital galleries where they could curate and proudly show off their hours of work. Now, it has become a way for generative artificial intelligence (AI) systems to harvest enormous amounts of source data for training.

 

However, some artists are fighting back. While there is always the possibility of pursuing text-to-image AI generators through the legal system, or searching databases to see whether your artwork has been swept into an AI training set, a team based out of the University of Chicago has developed software that lets digital artists protect their images through “data poisoning.” Their tools, Glaze and Nightshade, add subtle perturbations to an image’s pixels. Glaze distorts what a model learns about the artwork’s style so that the style can’t be copied, while Nightshade, given enough poisoned images, corrupts the basic concepts an AI associates with certain prompts. This means users could no longer ask for “a red apple on a plate in the style of Van Gogh,” or a request for a happy couple having a discussion could instead produce a herd of wild horses running across the plains.
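
To make “data poisoning” slightly more concrete, here is a minimal sketch of the general idea behind pixel-level cloaking. This is not the actual Glaze or Nightshade algorithm (both are considerably more sophisticated and target real diffusion training pipelines); the ResNet feature extractor, the file names, the decoy image, and the perturbation budget below are all illustrative assumptions.

```python
# Sketch only: nudge an image's pixels, within a small budget, so that a
# feature extractor "sees" it differently while a human barely notices.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen feature extractor standing in for whatever encoder a scraper's
# training pipeline might use (an assumption for illustration).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).to(device).eval()
for p in extractor.parameters():
    p.requires_grad_(False)

MEAN = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)

def embed(x: torch.Tensor) -> torch.Tensor:
    # The feature vector a downstream trainer would "read off" the image.
    return extractor((x - MEAN) / STD).flatten(1)

def load(path: str) -> torch.Tensor:
    img = Image.open(path).convert("RGB").resize((224, 224))
    return TF.to_tensor(img).unsqueeze(0).to(device)

art = load("artwork.png")    # the piece to protect (hypothetical file)
decoy = load("decoy.png")    # unrelated image whose features we push toward
target = embed(decoy).detach()

eps = 8 / 255                # per-pixel budget keeping the change subtle
delta = torch.zeros_like(art, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    cloaked = (art + delta).clamp(0, 1)
    # Pull the cloaked image's features toward the decoy's, so a model
    # training on it learns something other than the true style.
    loss = torch.nn.functional.mse_loss(embed(cloaked), target)
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)

final = (art + delta).detach().clamp(0, 1).squeeze(0).cpu()
TF.to_pil_image(final).save("artwork_cloaked.png")
```

The constraint doing the work is the budget eps: the pixel changes stay small enough that a viewer barely notices them, while the features a model reads off the image drift toward the decoy.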

[Infographic: Poisoning Data to Protect Artists]

Keypoint Intelligence Opinion

Some websites allow artists to opt out of having their work made available to AI text-to-image generators. DeviantArt, which bills itself as “the largest online art gallery and community,” only recently set new submissions to be unavailable to AI by default, and only after a lot of pushback from its users. Even then, opt-outs have proven hard to enforce. As the Glaze Team says on its website: “Opt-out lists have been disregarded by model trainers in the past and can be easily ignored with zero consequences. They are unverifiable and unenforceable, and those who violate opt-out lists and do-not-scrape directives cannot be identified with high confidence.”
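
The “do-not-scrape directives” the Glaze Team mentions are, in practice, entries in a site’s robots.txt file, and that protocol is purely advisory. A short sketch using Python’s standard library shows why (the site, bot name, and rules are hypothetical):

```python
from urllib import robotparser

# Inline stand-in for a site's robots.txt (hypothetical directives).
rules = """
User-agent: ImageTrainingBot
Disallow: /galleries/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

url = "https://example-art-gallery.com/galleries/artist/painting.png"

# A polite crawler asks before downloading...
print(rp.can_fetch("ImageTrainingBot", url))  # False: asked to stay away
# ...but an agent the rules don't name, or one that never checks at all,
# faces no technical barrier whatsoever.
print(rp.can_fetch("SomeOtherBot", url))      # True: no rule applies
```

Compliance is voluntary and violations leave no reliable trace, which is exactly the Glaze Team’s argument for defenses that travel inside the image itself.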

 

By now, most of us have seen the benefits of using generative AI, but it’s hard to call its use ethical without more regulation around how content is created and how the source material used to train these systems is gathered. Sam Altman, the CEO of OpenAI and (arguably) the face of artificial intelligence, has been very open about the need for more government oversight of AI and how it’s used. While some states and national governments are starting to float proposals on AI use, the EU’s Artificial Intelligence Act isn’t set to take effect until later this year, and it seems more concerned with labeling generative AI images and deepfakes as such than with helping artists protect themselves.

 

As with all technology, we have to be careful and informed about how it’s used as well as how it’s made. No one is saying that people can’t use generative AI to assist them in creating new forms of art (visual or written), but we need to do better. Artists should actively opt in (rather than navigate labyrinths to opt out) and be compensated for the time they put into the original pieces AI draws from; otherwise, we’re not going to have any art to appreciate from any source. Because why bother spending hours upon hours perfecting a talent if someone is just going to chop it up for parts and stitch it into a (not so) exquisite corpse?

 

Browse through our Industry Reports Page (latest reports only). Log in to the InfoCenter to view research on artificial intelligence and generative AI through our Workplace CompleteView Advisory Service. If you’re not a subscriber, contact us for more information.