Recently, an image generated by an artificial intelligence algorithm won an art competition.
As far as I can tell, this submission violates no rules. Pixel by pixel, the image was freshly generated – it was not “plagiarized” in the human sense of copying portions of another’s work wholesale. Indeed, if the AI were able to speak (which it can’t, because its particular design does not incorporate any means to generate language), it might describe its initial training as having “inspired” its current work.

The word “training” elides a lot of detail.
Most contemporary AI algorithms are not wholly scripted – a human programmer doesn’t write code that says, “When given the input ‘opera,’ include anthropomorphic shapes bedecked in luxurious fabrics.”
Instead, the programmer curates a large collection of images, some of which are given the descriptor “opera,” all others being, by default, “not opera.” Then the algorithm analyzes the images – treating each image as a grid of pixels, each pixel with a particular hue and brightness, and also computing higher-order statistics on that grid: if there is a red pixel in one location, what are the odds that nearby pixels are also red, and what shape will that red cluster take? From this analysis, the algorithm finds mathematical descriptors that separate the “opera” images from the “not opera” images.
An image designated “opera” is more likely to have patches with vivid hues that include bright and dark vertical stripes. A human viewer will interpret these as the shadowed folds of fabric draping an upright figure. The algorithm doesn’t need to interpret these features, though – the algorithm works only with a matrix of numbers that denote pixel colors.
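The stripe example can be made concrete with a toy sketch. Everything below is invented for illustration – a hand-built feature and a simple learned threshold standing in for what real systems do with neural networks over millions of images – but it shows the shape of the idea: the “classifier” never interprets fabric or figures, it only computes numbers from a pixel grid.

```python
import random

random.seed(0)

def make_image(opera):
    # A 4x4 grayscale grid. "Opera" images get alternating bright/dark
    # vertical stripes (a stand-in for draped fabric); the rest are
    # roughly flat gray.
    if opera:
        return [[1.0 if col % 2 == 0 else 0.0 for col in range(4)]
                for _ in range(4)]
    return [[0.5 + random.uniform(-0.1, 0.1) for _ in range(4)]
            for _ in range(4)]

def vertical_contrast(img):
    # Mean absolute brightness difference between neighboring columns:
    # one of the "higher-order calculations" on the pixel grid.
    total = sum(abs(row[c] - row[c + 1])
                for row in img for c in range(3))
    return total / (len(img) * 3)

# "Training": find the contrast value that separates the two labels.
images = [(make_image(True), 1) for _ in range(20)] + \
         [(make_image(False), 0) for _ in range(20)]
threshold = (min(vertical_contrast(i) for i, y in images if y == 1) +
             max(vertical_contrast(i) for i, y in images if y == 0)) / 2

def classify(img):
    return 1 if vertical_contrast(img) > threshold else 0

accuracy = sum(classify(i) == y for i, y in images) / len(images)
```

The point of the sketch is that nothing in it knows what opera is: the descriptor that separates the labels is just a number computed from the grid.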
In general, human programmers understand the principles by which AI algorithms work. After all, human programmers made them!
And human programmers know what sort of information was provided in the algorithm’s training set. For instance, if none of the images labeled “opera” within a particular training set showed performers sitting down, then the algorithm should not produce an opera image with alternating dark and light stripes arrayed horizontally – the algorithm will not have been exposed to horizontal folds in fabric, at least not within the context of opera.
But the particular details of how these algorithms work are often inscrutable to their creators. The algorithms are like children this way – you might know the life experiences that your child has been exposed to, and yet still have no idea why your kid is claiming that Bigfoot dips french fries into ice cream.
Every now and again, an algorithm sorts data by criteria that we humans find ridiculous. Or, rather: the algorithm sorts data by criteria that we would find ridiculous, if we could understand its criteria. But, in general, we can’t. It’s difficult to plumb the workings of these algorithms.
Because the algorithm’s knowledge is stored in multidimensional matrices that most human brains can’t grasp, we can’t compare the algorithm’s understanding of opera with our own. Instead, we can only evaluate whether or not the algorithm seems to work. Whether the algorithm’s images of “opera” look like opera to us, or whether an AI criminal justice algorithm recommends the longest prison sentences to people whom we also assume to be the most dangerous offenders.

#
So, about that art contest. I’m inclined to think that, for a category of “digitally created artwork,” submitting a piece that was created by an AI is fair. A human user still plays a curatorial role, perhaps requesting many images using the exact same prompt and then choosing the best, each generated from random seeds.
It’s a little weird, because in many ways the result would be a collaborative project – somebody’s work went into scripting the AI, and a huge amount of work went into curating and tagging the training set of images – but you could argue that anytime an artist uses a tool or filter on Photoshop, they’re collaborating with the programmers.
An artist might paint a background and then click on a button labeled “whirlpool effect,” but somebody had to design and script the mathematical function that converts the original array of pixel colors into something that we humans would then believe had been sucked into a whirlpool.
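As a sketch of what such a scripted effect might look like under the hood – this is an assumed formula, not Photoshop’s actual one – here is one plausible “whirlpool”: rotate each pixel around the image center by an angle that shrinks with distance, so the middle twists most.

```python
import math

def whirlpool(pixels, strength=2.5):
    # Inverse warp: for each output pixel, rotate its position around
    # the center by an angle that decays with radius, then sample the
    # source image at that rotated position.
    h, w = len(pixels), len(pixels[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = y - cy, x - cx
            r = math.hypot(dx, dy)
            angle = math.atan2(dy, dx) + strength * math.exp(-r / 3)
            sy = int(round(cy + r * math.sin(angle)))
            sx = int(round(cx + r * math.cos(angle)))
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = pixels[sy][sx]
    return out
```

Somebody designed and scripted that function; the artist who clicks the button is, in a small way, collaborating with them.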
In some ways, this collaboration is acknowledged (in a half-hearted, transactional, capitalist way) – the named artist has paid licensing fees to use Photoshop or an AI algorithm. Instead of recognition, the co-creators receive money.
But there’s another wrinkle: we do not create art alone.
Even the Lascaux cave paintings grew from a tradition. Although no other paintings from that era survived until the present day, many probably existed, in places that were less protected from the elements and so were destroyed by wind & rain & mold & time. The Lascaux artist(s) presumably saw themselves as part of an artistic community or tradition.

In the development of a human artist, that person will see, hear, & otherwise experience many artistic creations by others. Over the course of our lives, we visit museums, read books, watch television, hear music, eat at restaurants – we’re constantly learning from the world around us, in ways that would be impossible to fully acknowledge. A painter might include a flourish that was inspired by a picture they saw in childhood and no longer consciously remember.
This collaborative debt is more obvious among AI algorithms. These algorithms need fuel: their meticulously-tagged sets of training images. The algorithms generate new images of only the sort that they’ve been fed.
It’s the story of a worker being simultaneously laid off and asked to train their replacement.
Unfortunately for human artists, our world is already awash in beautiful images. Obviously, I’m not saying that we need no more art! I’m a writer, in a world that’s already so full of books! The problem, instead, is that the AI algorithms have ample training sets. Even if, hypothetically, these algorithms instantly drove every other artist out of business, or made all working artists so nervous that they refused to let any more of their work be digitized, there’s still an enormous library of existing art for the AI algorithms to train on.
After hundreds of years of collecting beautiful paintings in museums, it would take a hefty dollop of hubris to imagine immediate stagnation if the algorithms lacked access to new human-generated paintings.
Also, it wouldn’t be insurmountable to program something akin to “creativity” in the algorithms – an element of randomness to allow the algorithm to deviate from trends in its training set. This would put more emphasis on a user’s curatorial judgment, but also lets the algorithms innovate. Presumably most of the random deviations would look bad to me, but that’s often the way with innovation – impressionism, cubism, and other movements looked bad to many people at the beginning. (Honestly, I still don’t like much impressionism.)
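Both ideas in that paragraph – built-in deviation from the training set, and a user curating among many seeded candidates – can be sketched in a few lines. Everything here is invented for illustration (real generators don’t store a literal palette, and real curation is a human looking at images, not a taste function):

```python
import random

def generate(palette, n=16, deviation=0.2, seed=None):
    # Sample pixel hues from a learned "palette" (the trends in the
    # training set), but with probability `deviation` emit a hue the
    # training set never contained: a crude stand-in for built-in
    # creativity.
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        if rng.random() < deviation:
            out.append(rng.uniform(0.0, 1.0))  # deviate from training
        else:
            out.append(rng.choice(palette))    # follow the training set
    return out

# Curation: generate candidates from different seeds, keep the "best"
# by some stand-in taste function (here, the widest range of hues).
palette = [0.2, 0.5, 0.8]
candidates = [generate(palette, seed=s) for s in range(10)]
best = max(candidates, key=lambda img: max(img) - min(img))
```

Because each candidate is fixed by its seed, the human’s contribution collapses into choosing which seed’s output to keep – exactly the curatorial role described above.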
#
There’s no reason to expect a brain made of salty fat to have incomparable powers. Our thoughts don’t come from anything spooky like quantum mechanics – neurons are much too big to persist in superpositions. Instead, we humans are so clever because we have a huge number of neurons interconnected in complex ways. We’re pretty special, but we’re not magical.
Eventually, a brain made of circuits could do anything that we humans can.
That’s a crucial long-run flaw of capitalism – eventually, the labor efforts of all biological organisms will be replaceable, so all available income could be allocated to capital owners instead of labor producers.
In a world of physician-bots, instead of ten medical doctors each earning a salary, the owner of ten RoboMD units would keep all the money.
We’re still a ways off from RoboMD entering the market, but this is a matter of engineering. AI algorithms can already write legal contracts, do sports journalism, drive cars & trucks, create award-winning visual images – there’s no reason to believe that an AI could never treat illnesses as well as a human doctor, clean floors as well as a human janitor, write code as well as a human programmer.
In the long run, all our work could be done by machines. Human work will be unnecessary. Within the logic of capitalism, our income should drop to zero.
Within the logic of capitalism, only the owners of algorithms should earn any money in the long run. (And in the very long run, only the single owner of the best algorithms should earn any money, with all other entities left with nothing.)

#
Admittedly, it seems sad for visual artists – many of whom might not have nuanced economics backgrounds – to be among the people who experience the real-world demonstration of this principle first.
It probably feels like a very minor consolation to them, knowing that AI algorithms will eventually be able to do everyone else’s jobs, too. When kids play HORSE, nobody wants to be out first.
But also, we have a choice. Kids choose whether or not to play HORSE, and they choose what rules they’ll play by. We (collectively) get to choose whether our world will be like this.
I’m not even that creative, and I can certainly imagine worlds in which, even after the advent of AI, human artists still get to do their work, and eat.