When AI makes art, humans provide the creative spark

New products often come with disclaimers, but in April the artificial intelligence company OpenAI issued an unusual warning when it announced a new service called DALL-E 2. The system can generate vibrant and realistic photos, paintings, and illustrations in response to a line of text or an uploaded image. OpenAI's release notes warned that "the model can increase the efficiency of performing some tasks, such as editing photos or producing stock photos, which could crowd out jobs for designers, photographers, models, editors and artists."

So far that has not happened. People who have been given early access to DALL-E have found that it elevates human creativity rather than rendering it obsolete. Benjamin von Wong, an artist who creates installations and sculptures, says it has actually increased his productivity. "DALL-E is a great tool for someone like me who can't draw," says von Wong, who uses the tool to explore ideas that can later be built into physical artworks. "Instead of having to sketch out concepts, I can simply generate them with a few sentences of prompting."

DALL-E is one of many new AI tools for image generation. Aza Raskin, an artist and designer, used open-source software to generate a music video for the musician Zia Cora that was shown at the TED conference in April. The project helped convince him that image-generating AI will lead to an explosion of creativity that permanently changes humanity's visual environment. "Anything that can be visual will have a visual," he says, adding that this could disrupt people's intuitions about how much time or effort went into a project. "Suddenly we have this tool that makes what was hard to imagine and visualize easy to realize."

It's too early to know how such transformative technology will ultimately affect illustrators, photographers and other creatives. But right now, the idea that artistic AI tools will displace workers from creative jobs — the way robots are sometimes described as replacing factory workers — seems like an oversimplification. Even for industrial robots, which perform relatively simple, repetitive tasks, the evidence is mixed. Some economic studies suggest that companies' adoption of robots results in lower employment and lower wages overall, but there is also evidence that in certain situations robots increase job opportunities.

"There is far too much doom and gloom in the art community," says Noah Bradley, a digital artist who posts YouTube tutorials on how to use AI tools; some people, he says, are too quick to assume that machines can replace human creative work. Bradley believes the impact of software like DALL-E will be similar to the effect of smartphones on photography, making visual creativity more accessible without replacing professionals. Creating powerful, usable images still requires a lot of careful tweaking after something is first generated, he says. "There's a lot of complexity in creating art that machines aren't ready for yet."

The first version of DALL-E, announced in January 2021, was a milestone for computer-generated art. It showed that machine learning algorithms fed many thousands of images as training data could reproduce and recombine features of those existing images in new, coherent and aesthetically pleasing ways.

A year later, DALL-E 2 significantly improved the quality of the images that can be produced. It can also reliably adopt various artistic styles and can produce images that are more photorealistic. Want a studio-quality photo of a Shiba Inu dog wearing a beret and black turtleneck? Just type that in and wait. A steampunk illustration of a castle in the clouds? No problem. Or a 19th-century painting of a group of women signing the Declaration of Independence? Good idea.

Many people who experiment with DALL-E and similar AI tools describe them less as a replacement than as a new kind of artistic assistant or muse. "It's like talking to an alien entity," says David R. Munson, a photographer, writer, and English teacher in Japan who has been using DALL-E for the past two weeks. "It tries to understand a text prompt and communicate back to us what it sees, and it just squirms in this amazing way and produces things you really don't expect."