Hullo! I’m Bela, the resident Senior Brand Designer at Eskwelabs. I’m a nerd at heart who isn’t good with numbers, but I find my strength and skills in brand and visual design. I have a passion for tech and education that leaves me curious about how we hybrid humans can interact with programs such as artificial intelligence and inspire others to do the same! Connect with me via LinkedIn.
Welcome to “Curious Cafe,” a blog series of technological discovery mixed with a touch of my creative design process. Curious Cafe is where you (the reader) can grab a cup of coffee, enjoy the information, and follow a caffeinated writer sharing design experiments, case studies, and discoveries about basically anything in a 16-minute read.
Did you know that it takes at least 10 minutes before your coffee grows cold? So let’s get started!
“AI,” or artificial intelligence, is a simulation of human intelligence executed by machines, computer systems, and programs. AI works by learning from and recognizing patterns in large amounts of labeled data, modeling how our human minds work. In this article, we’ll be talking about the latest innovations of AI in digital art, from recognizing images to generating detailed images called “AI Art.”
AI Art is basically an artwork or visual illustration generated by a program with a very robust, well-trained neural network that can turn text captions into images. The top two AI programs that have been subject to scrutiny and compared with each other are DALL-E and Midjourney. These are the programs we used as a design case study here at Eskwelabs and the main tools used in this article.
In this day and age, technology is embedded in our daily lives, shaping what our future will look like and which industries will be affected. In the past three years, many jobs have been replaced by automation and artificial intelligence. Websites such as “Will Robots Take My Job” show data on jobs that might be at risk of automation, and for the longest time, jobs in the creative industry were safe from automation risks. Multiple articles were released on “Why artists will never be replaced by Artificial Intelligence.” But that all changed when DALL-E and Midjourney were launched. Their release created an uproar online among different art communities, and the main theme of their discussions was how they, as artists, could lose their jobs.
As a self-taught artist myself, I was not that excited about this news at first. One of the crucial reasons I was skeptical was that I would basically have to learn this new skill all over again to keep my job, compete with other artists out there, or worse, compete with AI. The last time mankind competed with AI, based on some movie references, we had to beat Ultron and the Matrix, both difficult and life-threatening. But I decided to give it a chance while watching Cleo Abram’s YouTube video “THE REAL fight over AI art,” where, reflecting on her experience interacting with DALL-E, she said:
“It didn’t feel as if the AI was leveling the playing field between us. It felt like I was getting new skills and it felt like he was getting superpowers.”
In this short but impactful quote, Cleo shares about two realities—her reality and the one her artist friend had embraced. He had opened his arms to how AI could not only boost his existing art and design knowledge, but also augment his ideation and creation skills.
So I got to work from there. I had to unlearn what I knew about what being an artist means in this day and age. As designers, I believe it is always up to us to find creative ways to solve a problem. And in my context, AI programs weren’t the problem, nor was I plotting to shut them down. My next steps were about answering the questions “How might I use them?” and “How can I design with AI?”
Come to think of it, different mediums and design techniques change faster than the seasons. Around 58% of the world’s population are social media users, so we consume an ever-growing amount of media every single day. The media we see competes for our attention. The way to win the game and get people to pay attention is by using strong creative visuals. But generating one strong visual after another can exhaust artists and their creativity.
In the past, we creatives used only pen and paper to sketch. Now we can use an iPad and a stylus. Today, design software and programs are being invented and launched left and right. Just imagine: if Van Gogh had animation skills at the time, he would have animated Starry Night with moving swirls, just like Petros Vrellis did 10 years ago. The future of design is already here, and there’s no denying it anymore, so it’s up to us to grow, adapt, and of course, be creative about it.
We’re now entering the rabbit hole of research I’ve done to produce the following experiments that you can try with DALL-E and Midjourney.
I found out that I had a hard time describing what I wanted to see. Without prior research into prompt writing, I immediately typed what I wanted, and the results weren’t that great. To save your DALL-E credits, I recommend learning more about prompt writing first.
Prompt design plays a key role in how the AI program generates images, and it is important to be clear when describing your main subject. But first, what is a prompt? It’s a series of descriptive words that acts as the input to an AI art generator. To us, it’s just a clump of text, but to the AI, it’s what it actually “sees.”
Here are some of the existing prompt tips from various artists that I found helpful:
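A common thread in these tips is structure: lead with the main subject, then layer on descriptors like style, lighting, and framing. As a minimal sketch of that idea (my own hypothetical helper, not part of DALL-E or Midjourney), here’s how you might assemble those pieces into one prompt string:

```python
def build_prompt(subject, **descriptors):
    """Join a main subject with optional descriptors (style, lighting,
    framing, etc.) into one comma-separated prompt string.
    Descriptors whose value is empty or None are skipped."""
    parts = [subject] + [value for value in descriptors.values() if value]
    return ", ".join(parts)

# The subject comes first; every extra keyword adds one descriptor.
prompt = build_prompt(
    "a castle floating above the clouds",
    style="detailed fantasy illustration",
    lighting="golden hour light",
    framing="wide shot",
)
print(prompt)
# a castle floating above the clouds, detailed fantasy illustration, golden hour light, wide shot
```

Keeping the subject and descriptors separate like this also makes it easy to swap out a single descriptor (say, the lighting) and regenerate, which is exactly how I ended up iterating on my own prompts.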
For the second experiment, I wanted to try bringing something fictional to life (well, virtual life). When we read fiction books, authors are the best people to learn from when it comes to describing something we haven’t imagined before. An example of this is the concept of dragons: they are fictional creatures, and it takes a whole lot of imagination for an author to describe them.
After DALL-E brought Christopher Paolini’s description of Saphira to life, I decided to try it out on Midjourney. I did a bit of research on the difference between Midjourney and DALL-E, and the best way to describe it is how Daniel Miessler put it: “Midjourney is like using a Mac, and DALL-E is like using Linux command line.” What I loved about Midjourney is that the generated images aren’t confined to 1024 x 1024; you can customize the aspect ratio you want within the prompt. An aspect ratio defines an image’s proportions of width to height with two numbers separated by a colon, such as 16:9.
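To make that two-number form concrete, here’s a small Python sketch (my own illustration, not part of either tool) that reduces pixel dimensions to their simplest width:height ratio, the same form Midjourney accepts through its --ar parameter:

```python
from math import gcd  # greatest common divisor, used to reduce the ratio

def aspect_ratio(width, height):
    """Reduce pixel dimensions to the simplest width:height ratio."""
    divisor = gcd(width, height)
    return f"{width // divisor}:{height // divisor}"

print(aspect_ratio(1024, 1024))  # 1:1  (DALL-E's fixed square output)
print(aspect_ratio(1920, 1080))  # 16:9 (a widescreen landscape)
```

In other words, a 1920 x 1080 canvas and a 16:9 prompt describe the same shape; the ratio only fixes the proportions, not the pixel count.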
Midjourney has an existing guide on how you can maximize your prompt. But here are other prompt tips by various authors.
Even though it wasn’t as accurate as the images generated by DALL-E, I was amazed by the variations it produced. It felt like Midjourney was showing me a glimpse of a whole new world, and it sparked so much creative inspiration in me that I wanted to explore it further. I decided to collaborate with Midjourney’s AI by generating a few more landscapes and castles. I combined them in Adobe Photoshop, and here we have a whole new world before us.
After collaborating with an AI for about a month and learning how to create prompts that are more accurate to what I had in mind, I finally had the chance to answer my question earlier: “How might I use AI art?”
Eskwelabs is launching a new product called Learning Sprints, which aims to redesign how learning is done in the future of work. Learning Sprints are perfect for upskilling and reskilling talented individuals in teams. The future of work is data, which is why our current Learning Sprints focus on equipping teams with data skills. At the same time, the future of learning is connection, which is why teams are the ones who’ll get the most out of the experience.
This is a new Eskwelabs product I’m definitely excited about. What better way to incorporate the lessons I learned from my experiments with AI than using them for the Learning Sprints brand design!
There were a few keywords I made sure to include when writing the prompt to ensure the “likeness” of the generated images, which I then manipulated in Adobe Photoshop to create specific variations and hues.
Here is where I can justify Cleo Abram’s claim that the AI program gave me some sort of superpowers in the form of inspiring creativity and forming unique illustrations that I myself would not have imagined. I only needed 3 tools to produce this collection: Midjourney, DALL-E, and Adobe Photoshop.
In Photoshop, the Content-Aware Fill feature and the Clone Stamp tool played a great role when merging the flat generated images, whether to expand or remove certain components. Another helpful trick is selecting the color range of the AI-generated images; I could then change the hues and tones to match the rest of the collection.
Check out our website to spot more of the AI generated images we used for our Learning Sprints and let us know in the comments section below if you’re interested to grab a copy of our prints.
This past quarter, we have been designing and delivering Learning Sprints to universities, government agencies, and organizations in the Philippines, and as a new year approaches, we are eager to share this dynamic and future-focused learning experience with institutions beyond our local borders.
Email us at firstname.lastname@example.org and we’ll take our online Learning Sprints to the country you’re based in and the institution you call home.
If you want to learn more about my AI design process, you can reach me at email@example.com.