Algorithms can't escape the past.
AI art is having a moment

In late September, OpenAI made its DALL-E 2 AI art generator widely available to the public, allowing anyone with a computer to make one of those striking, slightly bizarre images that seem to be floating around the internet more and more these days. DALL-E 2 is by no means the first AI art generator to open to the public (the competing AI art models Stable Diffusion and Midjourney also launched this year), but it comes with a strong pedigree: Its cousin, the text-generating model known as GPT-3 (itself the subject of much intrigue and multiple gimmicky stories), was also developed by OpenAI. Last week, Microsoft announced it would be adding AI-generated art tools, powered by DALL-E 2, to its Office software suite, and in June DALL-E 2 was used to design the cover of Cosmopolitan magazine.

The most techno-utopian proponents of AI-generated art say it provides a democratization of art for the masses; the cynics among us would argue it's copying human artists and threatening to end their careers. Either way, it seems clear that AI art is here, and its potential has only just begun to be explored. Naturally, I decided to try it.

As I scrolled through examples of DALL-E's work for inspiration (I had determined that my first attempt ought to be a masterpiece), it seemed to me that AI-generated art didn't have any particular aesthetic other than, maybe, being a bit odd. There were pigs wearing sunglasses and floral shirts while riding motorcycles, raccoons playing tennis, and Johannes Vermeer's Girl With a Pearl Earring, tweaked ever so slightly so as to replace the titular girl with a sea otter. But as I kept scrolling, I realized there is one unifying theme underlying every piece: AI art, more often than not, looks like Western art.

"All AI is only backward-looking," said Amelia Winger-Bearskin, professor of AI and the Arts at the University of Florida's Digital Worlds Institute.
"They can only look at the past, and then they can make a prediction of the future."

For an AI model (also known as an algorithm), the past is the data set it has been trained on. For an AI art model, that data set is art. And much of the fine art world is dominated by white, Western artists. This leads to AI-generated images that look overwhelmingly Western.

This is, frankly, a little disappointing: AI-generated art, in theory, could be an incredibly useful tool for imagining a more equitable vision of art that looks very different from what we have come to take for granted. Instead, it stands to simply perpetuate the colonial ideas that drive our understanding of art today. To be clear, models like DALL-E 2 can be asked to generate art in the style of any artist; asking for an image with the modifier "Ukiyo-e," for example, will create works that mimic Japanese woodblock prints and paintings. But users must include those modifiers; they are rarely, if ever, the default.

Winger-Bearskin has seen the limits of AI art firsthand. When one of her students used images generated by Stable Diffusion to make a video of a nature scene, she realized the twilight backgrounds put out by the AI model looked oddly similar to the scenes painted by Disney animators in the 1950s and '60s, which themselves had been inspired by the French Rococo movement. "There are a lot of Disney films, and what he got back was something we see a lot of," Winger-Bearskin told Recode. "There are so many things missing in those datasets. There are millions of night scenes from all over the world that we would never see."

AI bias is a notoriously difficult problem. Left unchecked, algorithms can perpetuate racist and sexist biases, and that bias extends to AI art as well: As Sigal Samuel wrote for Future Perfect in April, previous versions of DALL-E would spit out images of white men when asked to depict lawyers, for example, and depict all flight attendants as women.
OpenAI has been working to mitigate these effects, fine-tuning its model to try to weed out stereotypes, though researchers still disagree on whether those measures have worked. But even if they work, the problem of artistic style will persist: If DALL-E manages to depict a world free of racist and sexist stereotypes, it would still do so in the image of the West.

"You can't fine-tune a model to be less Western if your dataset is mostly Western," Yilun Du, a PhD student and AI researcher at MIT, told Recode. AI models are trained by scraping the internet for images, and Du thinks models made by groups based in the United States or Europe are likely predisposed to Western media. Some models made outside the United States, like ERNIE-ViLG, which was developed by the Chinese tech company Baidu, do a better job generating images that are more culturally relevant to their place of origin, but they come with issues of their own; as the MIT Technology Review reported in September, ERNIE-ViLG is better at producing anime art than DALL-E 2 but refuses to make images of Tiananmen Square.

Because AI is backward-looking, it's only able to make variations of images it has seen before. That, Du says, is why an AI model is unable to create an image of a plate sitting on top of a fork, even though it should conceivably understand each aspect of the request. The model has simply never seen an image of a plate on top of a fork, so it spits out images of forks on top of plates instead.

Injecting more non-Western art into an existing dataset wouldn't be a very helpful solution, either, because of the overwhelming prevalence of Western art on the internet. "It's kind of like giving clean water to a tree that was fed with contaminated water for the last 25 years," said Winger-Bearskin. "Even if it's getting better water now, the fruit from that tree is still contaminated.
Running that same model with new training data does not significantly change it."

Instead, creating a better, more representative AI model would require creating it from scratch, which is what Winger-Bearskin, who is a member of the Seneca-Cayuga Nation of Oklahoma and an artist herself, does when she uses AI to create art about the climate crisis. That's a time-consuming process. "The hardest thing is making the data set," said Du. Training an AI art generator requires millions of images, and Du said it would take months to create a data set that's equally representative of all the art styles that can be found around the world.

If there's an upside to the artistic bias inherent in most AI art models, perhaps it's this: Like all good art, it exposes something about our society. Many modern art museums, Winger-Bearskin said, give more space to art made by people from underrepresented communities than they did in the past. But this art still only makes up a small fraction of what exists in museum archives.

"An artist's job is to talk about what's going on in the world, to amplify issues so we notice them," said Jean Oh, an associate research professor at Carnegie Mellon University's Robotics Institute. AI art models are unable to provide commentary of their own (everything they produce is at the behest of a human), but the art they produce creates a sort of accidental meta-commentary that Oh thinks is worthy of notice. "It gives us a way to observe the world the way it is structured, and not the perfect world we want it to be."

That's not to say that Oh believes more equitable models shouldn't be created (they are important for circumstances where depicting an idealized world is helpful, like for children's books or commercial applications, she told Recode) but rather that the existence of the imperfect models should push us to think more deeply about how we use them.
Instead of simply trying to eliminate the biases as though they don't exist, Oh said, we should take the time to identify and quantify them in order to have constructive discussions about their impacts and how to minimize them. "The main purpose is to help human creativity," said Oh, who's researching ways to create more intuitive human-AI interactions. "People want to blame the AI. But the final product is our responsibility."

—Neel Dhanesha, reporter