# Artist consent is a weird hill to die on

I can't help but comment on the ongoing seething by artists at the new AI image generation techniques. I understand some of the emotion: it's the same emotion that made some artists hate photography when it came about, the same emotion found in certain online subcultures that get very protective over OCs ("-kin" types and the like), and it's related to the more modern "tech bad" emotion that caused similar seething at the whole ongoing NFT saga. (Note: I have no strong positive or negative feelings about NFTs; I don't own any, and I don't intend to own or 'mint' any.)

But it's just emotional lashing out, and unfortunately the arguments tend to be terrible. They center around the same sorts of things big IP outfits have used for years, often to bully the very kinds of small artists who are now themselves seething: rhetoric around "theft" and "consent", monetary compensation, and sometimes "licensing". The arguments rarely look into how the AIs actually work, which in fact resembles a learning process more than a memorize-and-copy process, and, weirdest of all, they fail to consider where we'll be in a year or two.

Prediction: Stable Diffusion-like models released in a couple of years will not need to have been trained on any non-public-domain data, and maybe not even much of that. We can draw a comparison with AlphaGo: it was trained on human games and did well. The next versions were trained on no human games at all, solely on self-play, and they were far superior. Is there a way to train something like Stable Diffusion on its own drawings and still end up with a model that can draw interesting things for humans? What I expect is that a fully trained model can be given one-shot examples plus a description and, without changing anything in the model (i.e. without learning from the examples), output something that pleases the human. Much like I can go to an amusement park and find an artist who has never seen me before but, because of their training, can draw a pleasing portrait of me.

So it's a bit weird that current artists are up in arms about the whole consent thing when in a few years it's not likely to matter. The AI will not have trained on your cool sci-fi drawing, but I as an individual can later show the AI such a drawing and ask it to draw something similar, and I expect it to do well despite never having trained on it or anything like it, because it understands some fundamental things about drawing -- the same things artists themselves learn that let them pick up new styles. I as an individual might run into copyright trouble if I try to use the output for something, but that is already the case. For example, I can ask the AI to draw a particular Pokémon, and I can make a very convincing case that the output is nothing at all like anything in the training data: it is not "copying" from anyone or from any collection of people; it is truly original. However, it still depicts that particular Pokémon, whose copyright is owned by Nintendo, and Nintendo ultimately can decide whether or not to sue should such art be used for anything without their approval. (And something some artists seem to conveniently forget, though perhaps it's mostly foreign artists: in the US we have a thing called Fair Use, so even infringing for-profit uses are not necessarily a problem as far as the law is concerned.)

Anyway, just a minor rant I wanted to get out. Check back in a couple of years and we'll see if my prediction came true; I give it 80% odds.

#### Posted on 2023-03-06 by Jach
