But then OpenAI’s DALL-E 2 arrived, and the results I saw were alarmingly impressive. The new version can create anything from complex illustrations that would take professional illustrators days or even weeks to craft, to a range of logo concepts generated in a matter of seconds, work that ordinarily takes designers hours and hours of pixel pushing.
How does it work, exactly?
DALL-E 2 utilizes a ‘generative model’ — a simplified term for a sophisticated deep-learning AI that creates images from natural-language descriptions. It learns the relationships between concepts and visual elements, which allows it to combine them in progressively different ways.
I’ve given an example in the image above. For this AI-generated image, all I inputted were the words ‘The last selfie on earth, high quality digital art.’ There were still disparities in the images DALL-E 2 formed from my prompt, but the connections between the visual elements remained undeniably consistent with what I would expect to see.
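To make the interaction concrete, here is a minimal sketch of how a request to an image-generation service like DALL-E 2 is typically structured: a text prompt, a number of variations, and an output size. The field names below mirror OpenAI’s public Images API, but treat them as illustrative assumptions; a real call also requires an API key and an HTTP request to OpenAI’s endpoint, which I’ve deliberately left out.

```python
import json

def build_image_request(prompt, n=4, size="1024x1024"):
    """Assemble the JSON payload an image-generation API expects:
    a natural-language prompt, how many variations to generate,
    and the output resolution. (Field names are assumptions based
    on OpenAI's public Images API.)"""
    return json.dumps({"prompt": prompt, "n": n, "size": size})

# The same prompt used for the example image above.
payload = build_image_request(
    "The last selfie on earth, high quality digital art"
)
print(payload)
```

The point is how little the caller supplies: the model infers everything about composition, style, and subject matter from that one sentence.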
AI products and the future.
Well, as much as DALL-E 2 has improved on its predecessor, what does the future hold for AI, and how much further will it evolve?
Just a few months ago, OpenAI revealed that DALL-E 2 had entered beta, with around one million users invited to try the product. Users own all the commercial rights to the images they generate, and there are no image-use restrictions for businesses or brands.
This news garnered a mixed reception. A commercially available AI product could spell trouble for working artists, creatives, and designers. Businesses like stock photography image sites may also be under threat. With this kind of product, companies can rapidly expand their in-house creative capacity, and marketing teams can save time and money.
The incredible capacity of AI also raises an important ethical question: how can we guarantee appropriate use, and filter out deceptively realistic or offensive imagery? OpenAI took a long time to make DALL-E 2 commercially available, and grappling with this significant challenge was part of the reason why.
Companies behind DALL-E 2 and other AI programs, and the businesses using them, must be wary of where, how, and when they rely on AI-generated content. AI is still in its infancy, and infants have lots to learn!
So, should we be frightened of AI? Or is it just a new tool to use?
Despite its youth, AI has been around for a while and will inevitably influence art, design, and even content production for decades to come, as it is already proving. Rather than treating our relationship with AI programs as fighting a seemingly lost cause, the design community (myself included) should embrace these new capabilities and look for opportunities to harness the power of AI to enhance our collective output.
Broadly speaking, let’s accept the change, not fight it.
I can already see how this astonishing tech could fit into a visual artist’s workflow once it hits the mainstream, even though it’s still unclear exactly what that might look like. Should AI programs like DALL-E 2 be bought up by a company like Adobe or Getty and fleshed out as just another of our day-to-day tools? I have to say, I’m warming to the idea.