Art meets AI
A feature article from Applied Arts magazine
April 15, 2025
Article written by Will Novosedlik and Aria Novosedlik. Image by Aria Novosedlik/Midjourney.
ON THE EVENING OF MARCH 21, TECH BLOGGER MATT WOLFE WAS IN A STATE OF HIGH EXCITEMENT.
“We’re only at the beginning of what AI can accomplish. Whatever limitations it has today will be gone before we know it.”
– Bill Gates, March 29, 2023
He had spent the day attending the Nvidia GTC conference, where CEO Jensen Huang made some big announcements, after which Wolfe barely had time to catch his breath before jumping in front of the mic to begin his podcast.
“LAST WEEK WAS THE CRAZIEST WEEK I’VE EVER SEEN,” the blogger breathlessly proclaimed. “We saw huge announcements from OpenAI launching GPT-4 to Google announcing they’re going to be putting AI inside of their Workspace tools to Midjourney 5 and Microsoft 365 Copilot. We ended the week with Stable Diffusion launching Reimagine, and this week – it’s only Tuesday – we don’t only have one announcement or two announcements, we have five huge, insane announcements, from Bill Gates to Nvidia’s Jensen Huang to Google to Adobe to Microsoft. It’s just getting crazier and crazier. Things that people said we won’t see for years, we’re seeing in weeks. That’s how fast things are moving. It’s nerd orgasm all over the place right now!”
The most interesting news for creatives – especially illustrators and photographers – was the announcement of a partnership with Getty Images, Shutterstock and Adobe, meaning that their AI image generation will use licensed images only. There’s no gray area about how the training images were obtained, so no need to lose sleep over copyright, an issue that has already resulted in lawsuits.
“Speaking of Adobe and Nvidia”, continued Wolfe, “today Adobe also announced Firefly Beta. This is Adobe’s new AI art model, completely trained on licensed and open-source images. So again, no worry about future copyright issues on any of the images generated from Adobe Firefly. Even more interesting, Adobe Firefly plans to compensate artists who allow use of their images to train these models.”
Jensen Huang’s “We are at the iPhone moment of AI” might have been the sound bite of the day, but it felt a lot more like an everything, everywhere, all at once moment.
Fear of Firing
The hype cycle has a tendency to stoke both fear and excitement, and the hype around generative AI has gone nuclear. The fear is being stoked by all the old tropes about the robots taking our jobs away, or maybe even replacing humans altogether because we know the planet couldn’t give a damn if we all disappeared tomorrow and, hey, maybe AI will do a better job of running things anyway. The excitement is that AI will bring unimaginable benefits to humankind, make us smarter, faster, stronger and far more able to tackle the huge problems we are facing right now.
The reality is, we have no idea where it will take us.
On the same day that Wolfe and Huang were gushing about the power of AI as a generative technology, Bill Gates descended from the heavens to share his own words of wisdom on the subject. “The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone”, said Gates. “It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it. No matter what, the subject of AI will dominate the public discussion for the foreseeable future.”
He encouraged us to try to balance our understandable fears about the downsides of AI with its ability to improve people’s lives. “Finally, we should keep in mind that we’re only at the beginning of what AI can accomplish. Whatever limitations it has today will be gone before we know it.”
Does that mean your job will be gone before you know it? Shane Saunderson, CEO and founder of AI consultancy Artificial Futures, has this to say: “If you look at the history of technology, that is how it’s going to go. Ask the typographers how it went when desktop publishing came on the scene and wiped out a five-hundred-year-old craft. How is this any different?”
Anyone who’s played with large language models like ChatGPT or text-to-image models like Midjourney has experienced the speed and volume of their output and the endless variations that can be created in almost no time at all. This will have a significant impact on the process, roles and business models of most creative firms. On one hand, the mundane tasks that are often given to juniors will be more easily and quickly done by AI. But taking those tasks away means taking away the learning that comes from executing those tasks. It also means reducing headcount and payroll. The AI doesn’t need a paycheque.
For juniors, the upside could be that you’re no longer a junior designer, you’re a junior creative director. The time freed up when AI takes over mundane tasks can now be spent on higher-level tasks. According to Iain Tate, partner at creative studio food.xyz, “AI gives everyone a shot at leveling up what they’re capable of. It gives someone who couldn’t code the ability to code, it gives someone who can’t fly around the world to take photographs anymore because of climate change the ability to make photographs of anything, anywhere in the world without moving from their desk.” In other words, it gives juniors a chance to focus on tasks and skills that they would normally not be exposed to until later in their development.
Aria Novosedlik/Midjourney
As part of researching this article, designer/researcher Aria Novosedlik prompted Midjourney to create a website for a line of sneakers. Here are the four standard variations on the web page that Midjourney delivered. The amazing thing here is that not only was the AI capable of generating page layouts, but it was also able to design the individual products themselves. It’s easy to see how this is going to catch on with product and UI designers.
The volume, variation and velocity AI is capable of now also puts pressure on creative firms to redefine the value they bring to clients. Cam Wykes is Executive Experience Director, North America, for global content marketing shop Oliver. Says Wykes, “We’re currently challenged by what clients think generative AI can bring to the table. They tend to focus on the reduced costs delivered by automation, because it will definitely save hours. We’re okay with that, but as we move from using AI to streamline tasks to full-on task replacement, we need to figure out where else our value lies.”
For a content marketing company like Oliver, which routinely deploys thousands of assets all at once across several channels and markets, there is certainly a ton of value in AI-driven asset deployment and management expertise. And there is also value in using that expertise to educate clients. Says Wykes, “You want to get out in front of it and say, here’s what it is, here’s how you use it, here are some of the tools that we recommend. Here’s the expectation, here’s the reality, and here’s our process. Whether they’re asking for it or their bosses are asking for it, they’re going to have to start integrating generative AI into their business models. We’d like to be their preferred advisors.”
Another way in which creatives can add value is by dedicating the time saved on mundane, automated tasks to more robust research. Jason Tselentis, who teaches design at Winthrop University in South Carolina, says, “The ability to leave the studio and go out into the street or into some other environment to conduct an ethnography in places where you wouldn’t normally go is where the gold is for creativity. As a designer I’d be very interested if I could automate a part of the process that normally would keep me from doing better research.”
Jason Tselentis/Midjourney
Jason Tselentis teaches design at Winthrop University in South Carolina. He is a dog owner, and his favourite breed is pugs. Here he prompts the AI to mash up the pugs with Superman, with satisfying results.
Abuse cases
The hype around generative AI whipped the internet into a frenzy as these super-powered chatbots promised to chew their way through one industry after another, disrupting and destabilizing as they go. Then on March 29th, the New York Times reported on an open letter urging a six-month moratorium on the development of the most powerful AI platforms, signed by over 1,500 (at time of writing) tech leaders and thinkers, including Elon Musk, Steve Wozniak and Yuval Noah Harari.
The letter asks some very important questions: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, out-smart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to un-elected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.
What these leaders want to avoid is what some call a ‘technological singularity’, a hypothetical point in time where technological growth becomes uncontrollable and irreversible. According to one of the signatories, “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.” If that’s not a recipe for disaster, you’re in the wrong kitchen.
Some creative practitioners have already been burned and are seeking redress. The most notorious case is Greg Rutkowski, a Polish gaming illustrator whose style is apparently the most popular prompt on the internet. While famous artists such as Michelangelo, Picasso, and Leonardo da Vinci typically bring up around 2,000 prompts each, Rutkowski’s name has been used almost 100,000 times. As an article in the MIT Technology Review dated September 16, 2022 reminds us, the neural networks that power these generative AI algorithms are trained indiscriminately on the millions of images available online at sites like ArtStation, DeviantArt, Getty Images, Shutterstock, and others without the permission of the sites or the artists. Essentially, an AI image generator takes an artist’s work and ‘repackages’ it in a consumer-friendly form, without compensating the original artist.
Jos Avery/Midjourney
Software engineer and avid photographer Jos Avery generated some controversy and an explosion of followers when he created a series of portraits as an experiment to see if people could actually tell that these were not real photos. They are not, but some photographers thought they were, and when they started asking what kind of equipment was used, Avery came clean and admitted they were in fact created in AI. The ensuing debate split his followers into two camps, one calling them ‘fake’ and the other praising him for a job well done. The fact that Avery spent hundreds of hours refining the raw AI files in Photoshop is not much different from what a photographer does in post with ‘real’ photos. You be the judge.
Illustrator Liz DiFiore, president of the Graphic Artists Guild, claims artists’ wages have been declining for years, primarily due to infringement. If left unchecked, generative AI could reduce what is already a low median salary to almost zero, so artists have begun to try to get companies to delist their work from AI training modules. According to Artnet, artists Mat Dryhurst and Holly Herndon recently launched Spawning, a tool that allows people to set permissions on how their style and likeness can be used by machine learning.
Commenting on the Getty lawsuit in a January article in The Verge, Andres Guadamuz, an academic specializing in AI and intellectual property law at the UK’s University of Sussex, had this to say: “I think there are ways of building generative models that respect intellectual property. I equate this to Napster and Spotify. Spotify negotiated with intellectual property rights holders—labels and artists—to create a service. You can debate over whether they’re fairly compensated in that or not, but it’s a negotiation based on the rights of individuals and entities. And that’s what we’re looking for, rather than a singular entity benefiting off the backs of others. That’s the long-term goal of this action.”
As for the illustrators’ legal challenges, experts say they will be facing an uphill battle. Precedents are few. Plaintiffs will need to conclusively prove the similarity between fakes and the real thing, and no artist has copyright protection on style. “In a perfect world, when it comes to copyright issues”, speculates Shane Saunderson, “all of these AI models should basically give you access to their database. You should be able to search the databases of any one of these models to see what they’ve created. And if you want to train on a bunch of stock photography on getty.com, that’s fine. Pay Getty a royalty fee for that.”
Creative Collaborations
Some folks seem to agree that if you can’t beat AI, join it. Cognitive scientist and UX designer Don Norman, author of The Design of Everyday Things, views AI as a great collaborator. In a recent post, he mused, “Remember that the ‘A’ in AI means artificial. AI isn’t intelligent; it’s pattern matching and it’s doing things, but it has no deep understanding at all. I’ve tried using it. I’m not so good at sketching, so I ask it to illustrate my concepts. And it works reasonably well, except it never gets it right the first time. I have to say, here’s a picture I want. And then look at it and say that’s not quite what I wanted. And then I have to figure out a way of describing what I do want and then go back and forth. And so it’s much more of a collaboration, a dialogue.”
Derek Shapton, Westside Studio/Midjourney
Professional photographer by day, AI jockey by night, Derek Shapton has been experimenting with Midjourney in a unique way. His prompts are haikus, written by Shapton himself. He decided to write his own as opposed to using, say, the haikus of Jack Kerouac in order to avoid any possible copyright infringement, something he is very careful about. While the original prompts are haikus, he relies on a number of secondary prompts which he is now trying to protect, like a secret recipe. After creating one of these images he then uses Photoshop to achieve the final aesthetic that he wants. Below are a few examples of images and their prompts. Needless to say, getting quality output and shaping it to match your aesthetic requires many additional hours of work.
“It’s a collaboration because we think of the idea and then we have to judge what the AI produces to say whether that’s at all what we thought of. And sometimes it will produce something that’s so weird and strange and we sit and look at it and say, oh, wow, I would never have thought of that. And then you can spend a few more days shaping it. So we’re going to have to learn to think and design in a very different way. That’s true of every advance in technology over the ages. Every time a new technology comes in it changes the way we behave and most of the time in positive ways. It takes a while for us to get used to it. And that’s what I think will happen with AI.”
That’s one way to look at it. But does AI offer deeper, more profound challenges to the way creatives are accustomed to working? Iain Tate thinks so. “I think the one thing that creative industries have never really got their head around is how things like open source and GitHub and this way of people working together is the reason why software has eaten the world.”
He points out that in the software industry, developers are encouraged to build on each other’s work, but in the creative industries, if you borrow from someone else’s work, it’s considered a crime against creativity (even though everybody does it all the time). Says Tate, “If the creative arts could end up behaving more like software, and I could say, hey, Will wrote this really cool piece, and I’m giving him full credit for it, but I’ve added to it and made some new stuff and, you know, aren’t me and Will working together well, even though we’ve never met each other, I’d be interested in that.”
Yaron Meron, who teaches in the design department at the University of Sydney, recently released a paper in which he examines the idea of collaboration at an even more fundamental level. Says Meron, “The reason I wrote the article is that there’s still this disconnect between designers and software engineers. We’re not talking to each other. And to a large degree, it’s our own fault for not putting ourselves out there and writing and saying what it is we do as graphic designers and communication designers.”
Maybe if creatives were more involved in the design of generative AI, we wouldn’t be embroiled in lawsuits stemming from the mutual lack of understanding between us and the engineers. Had creatives been more present during the development of Stable Diffusion, for example, engineers might have been alerted to the potential threats to artists’ livelihoods, and we would have a platform that has built-in protocols for permission and compensation.
Zach Bautista, Geoff Baillie, Rethink/DALL·E 2
Collaborating with both OpenAI and consumers, creative directors Zach Bautista and Geoff Baillie of Rethink got early access to OpenAI’s DALL·E 2 to create an initial flight of prompts and images, then posted those and asked consumers to send in dozens of new prompts, which were then used to generate more and more images. Aside from posting on Instagram, many of the resulting images became the raw material for print ads, posters, TSAs and videos. After completing the experiment, Bautista and Baillie could see the potential for adding generative AI to the creative toolkit and using it in future work.
Bringing Together Both Sides of the Brain
The March 29th open letter calling for a pause while we all work together to figure out how we can design AI to be less dangerous, less threatening and less biased is an opportunity for what Julio Ottino, outgoing Dean of the McCormick School of Engineering at Northwestern University, calls “whole-brain engineering”. It is a pedagogical approach that combines the quantitative, analytical and logical skills commonly associated with engineering with the qualitative, metaphorical and creative skills associated with artmaking.
In the recently published book The Nexus, on which Ottino collaborated with designer Bruce Mau, this kind of cross-silo thinking is deemed essential for solving the wicked problems the world faces today. As the book states, because the world “faces enormous challenges of unprecedented complexity – problems that intertwine in a dizzyingly interconnected, interdependent and changing landscape”, we must “adopt new ways of thinking and working that cross the boundaries of classical knowledge…at the nexus where art, technology and science converge.”
As wicked, intertwined, complex problems go, AI is a prime contender, so if ever we needed some whole-brain engineering, it’s now. Looking down the list of signatories of the March 29th letter, there are lots of scientists, engineers, and Silicon Valley entrepreneurs, but one is hard pressed to find artists and designers. Is that, as Yaron Meron suggests, because artists and designers are too shy or insecure or uninterested to assert themselves? It might be time for us creatives to get much more involved in this conversation, well before the robots really do replace us.
This story originally appeared in the Summer 2023 issue of Applied Arts magazine.