So I’ve had a week to play around with my new AI toy, and it’s fair to say I’ve thought about and done little else. It’s really exciting, so I thought I would share some of my thoughts, because it’s nothing but interesting. At least to me.
So if you’re interested, get yourself a nice glass of iced tea and get comfy, because this might take a while. But it’s well worth it. It should expand your sense of what this tool can do and what I can turn into book covers for you.

As a little aside, all the images you’re going to see throughout this verbose diatribe are generated using the AI too.
What It’s Like to Use
Firstly, the best metaphor I can give to explain what it’s like is this: it’s like having 10,000 robot painting artists at your beck and call. Unfortunately these robot painting artists are like unruly toddlers, forever bumping into the furniture and falling over, so you have to guide them. And even then they won’t do exactly what you want ‘at the moment’ — more on that later.

Secondly, it’s SLOW! It’s quite an arduous task to get something half decent out of the machine, because renders take a long time. Once you put in some information you get four variations, and then you can do four variations on any of those variations, on and on, until you hit on something useful. You might go through four or five iterations before you get to a good image, and each step needs processing and takes about 4 or 5 minutes, depending on how busy it is and what time of day it is.

Thirdly, it’s expensive in a way, because to speed this up you can set it to FAST mode, but you only get 15 hours of fast mode a month, even on the top tier, which I pay for. After that it’s €4 per extra hour, which is going to add up quickly. To put it into context, I’ve already used 3 of my 15 fast hours this week, and 90% of the time I’ve been running it in the slow, sort-of-free mode. And given that I already pay €250 for Shutterstock, I don’t really want to run up the same sort of bill for the AI. But I think this will change at some point down the line — more on this later too.

So to put it into the context of my normal working days: yes, you can get it to do great images, but it probably takes about 60-75% more time to actually do the work. I know this because I’ve done four commissions this week using it. Each of those four clients was really taken with the final results, and when presented with both AI examples and stock image examples they went for the AI version every time!

And it’s about the same amount of extra time (60-75%) to generate good images for pre-made covers, as it is to look for stock images. But at least we get some really interesting stuff. And I’m bored, bored, flippin’ bored of Shutterstock and I’ve rinsed it to death, after making over 20,000 covers.
How Useful is It?
Well you’ll see from next week’s pre-made covers, it’s pretty damn good! But only once you guide it down certain paths. In fact, I’m really enjoying it. I think the biggest advantage is that once you find styles that hit, you can use it pretty well to make stunning images. So back to the ‘10,000 robot painters’ way of looking at it. Yes, you can get it to paint in the style of most famous artists with ease, here’s a list of all the artists it can and can’t emulate. In fact, you can also do remixing with it. So there is one werewolf cover in next week’s offerings which was a Klimt / Warhol remix.
But here’s the rub: it generated a whole load of garbage before I got to those ones. I mean lots of stuff that didn’t make sense at all. Here’s some examples. It had a tendency to be all over the place a lot of the time, making a lot of images where it puts legs, eyes and body forms in totally the wrong place. This, I guess, will get better. But it’s rather frustrating and time consuming with some things.
Here’s what I had to go through:




So: useful, yes; amazing images, yes; but 90% of the time goes into trial and error. I guess over time I’ll get better at working out the commands I want to use. In fact, I’ve started building some of my command lines already. I’m also looking at what other people are using in the public channels on the Discord (yep, it’s a Discord bot), which is really helpful for experimenting. I’ll give you an example of a command line that I found that I quite like, so you can see what I mean:
in the style of Hyperrealism, in the style of modern futurism, in real life, NVIDIA RTX ON, RYZEN AMD GRAPHICS, Octane Render, blender render, award winning photograph, trending on art station, James Cameron CGI, National Geographic photo of the year, High quality lighting, stage lighting, award winning cinematography, r/aesthetic Top This Week, Canon EF-S Macro 35mm, BluRay, iMAX, photorealistic, photogenic, Ultra settings, Quantum dot display, super-resolution, bullet physics engine --q 2 --ar 5:8 --stop 95
Yeah, pretty out there, right? So there is a lot of trial and error to understand what it understands, what you can make it do from its dataset, and how to speak to it in its own language.
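As an aside, if you end up reusing fragments like these a lot, you could assemble prompts programmatically rather than retyping them. This is just a toy sketch of that idea in Python; the fragments are lifted from the command line above, and the trailing flags simply mirror the ones in that example:

```python
# Toy sketch: assembling a prompt from reusable style fragments.
# The fragments are taken from the example command line above.

STYLE_FRAGMENTS = [
    "in the style of Hyperrealism",
    "Octane Render",
    "award winning photograph",
]

def build_prompt(subject, fragments, aspect="5:8", quality=2, stop=95):
    """Join a subject with style fragments, then append parameter flags."""
    parts = [subject] + fragments
    flags = f"--ar {aspect} --q {quality} --stop {stop}"
    return ", ".join(parts) + " " + flags

prompt = build_prompt("werewolf under a neon moon", STYLE_FRAGMENTS)
print(prompt)
```

That way, once you find a recipe that hits, you can bolt new subjects onto it without losing the incantation.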

But I’m not one to shy away from learning. In fact, learning is one of my favourite things!
What’s Bad about AI in Practical Terms for Book Covers?
There are a couple of things going on here:
Series Covers: I know a lot of you authors love series covers, but these are a bit harder, because an image generated at a later date might not match the original book cover. It’s way easier to do them all at the same time, because you can get it to do variations. If we decide on a style up front then this is less of a problem. You’ll actually see from some of my premade covers this week that I have done series covers, so it’s not impossible. But I need to remember the style for each, or at least note them down. So: possible, just a little harder.
Print Covers: You can output images at various aspect ratios, so you can do an image at 5:8, but it means I’ll probably have to be a little clever to make sure I can turn that image into a print cover. It was a little easier with a stock image, because I’d always crop with a bit spare to the left of the image for the wraparound. But I’ll always find design solutions to that problem.
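For the curious, the wraparound maths itself isn’t complicated. Here’s a rough Python sketch of the usual full-wrap calculation; the trim size, bleed and per-page thickness figures are my own illustrative assumptions, not anything from a specific printer, so always check your printer’s spec sheet:

```python
# Rough sketch of full-wrap print cover maths. The defaults (6x9 inch trim,
# 0.125" bleed, 0.0025" per page) are illustrative assumptions only.

def full_wrap_size(trim_w=6.0, trim_h=9.0, pages=300,
                   per_page=0.0025, bleed=0.125):
    """Return (canvas width, canvas height, spine width) in inches."""
    spine = pages * per_page
    width = bleed + trim_w + spine + trim_w + bleed   # back + spine + front
    height = bleed + trim_h + bleed
    return width, height, spine

w, h, spine = full_wrap_size()
print(f"canvas: {w:.3f} x {h:.3f} in, spine {spine:.3f} in")
```

The point being: a 5:8 front-cover image doesn’t leave you any spare canvas for the back and spine, which is why the generated image needs clever handling.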
PG-13-ish: The other thing that the AI I’m using does, is content moderation. So it won’t do words like ‘sexy’ or ‘blood’ or ‘shower’ or ‘entrails’ or ‘naked’ or ‘boobs’. But it will do ‘dark red liquid,’ so you know, there are sort of ways around it.

And when I put in ‘woman with shapely bristols’, did it understand this anachronistic euphemism? Who knows? She seems to have three ‘bristols’ in the bottom left. But it’s not a banned word. And these images ‘seem’ erotic in a certain way.

So yeah, too sexy erotica, or too graphic horror is off the table.

No Good for Veruca Salts: There is a certain type of author who gives me a whole list of attributes for their main character and the scene we find them in, because they have a very specific image in their mind’s eye. What the AI is not very good at is following a massive list of instructions like that. It just doesn’t happen. It gets confused easily. It’s not good at perspective once you add more than one foreground element and one background element. Actually, it’s not that amazing at creating bodies yet. Or spaceships, for some reason (or at least I’ve had problems thus far). Oh, and it’s terrible at horses, as you can see below. This might improve. But at the moment it’s better to approach it with an open mind, have a single element and background in mind, and see what happens.

What’s Good about AI in Practical Terms for Book Covers?
But once you open your mind to what it produces, that’s where it gets very interesting. And it’s quite good at a number of things that are utterly vital when it comes to a good cover.
One Focus: I always say a cover should be simple and strong, with one focus, and as long as you play nice with the AI and don’t over-egg the pudding, it comes back with really good, simple and focused results. It’s good at putting single characters or single items in an integrated scene so that they stand out.
Semiotics: This is also why I really like using this AI tool and the odd results it produces. Because it’s based on a neural network, it stores and thinks in concepts. What it understands is how one concept connects to another concept. It understands things like scary, dark, happy, angry and the things that we as humans understand as those concepts / symbols. So when you ask it to mix an ‘object’ with a ‘concept’ it has its best go at it. And sometimes it produces things you as a human wouldn’t produce, because your neural network (i.e. your brain) immediately goes to the cliché, simply because it’s the quickest shortcut in your own head. But what the AI does is subvert that cliché, because it takes more roundabout pathways to get there. As humans we still understand what it means, because it’s connecting with us on a semiotic level. We understand A to B, but we also understand A to B via C or D or E or even Pi. And the AI does that. For me this is killer when it comes to a cover design. It’s close enough to what we understand …

Intriguing: … and far enough away from what we normally see for us to spot a break in the hum-drum patterns of the book cover styles we’ve seen a thousand times. It intrigues, and to me that’s the interesting part. In fact, the best way I can describe the feeling I’ve had this week is that my brain has slightly capsized, because it produces these subconscious, dream-like pieces of work. Things I understand but at the same time have never seen before. Ever. Or ever will again. It’s a very strange feeling for a designer. If you spend time with someone or something that thinks completely differently to you, you always come away feeling somewhat changed and confused.
Uniqueness: And here’s the kicker: it produces different things all the time! Always. You can run the same commands on and on and it’ll be different every time. So there is absolutely no way anyone will ever have the same book cover, because that image is completely unique, unlike images that you find on stock websites. So you know that the book cover is going to be original, which is a complete bonus for people who like that idea.
Eye-catching: Another thing that it’s really good at, given the right set of instructions, is interesting colour palettes. Colour schemes you wouldn’t normally think of that really match well. It seems to have its own sensibilities. And very much understands things like ‘pastel palette’, ‘dark brooding palette’, ‘neon palette’ and never comes up with dodgy results. Which is really wonderful. Good colour is what I’m drawn to and what potential readers should be drawn to.

Emotions: I started using this one this week and it seems to be a very emotive painter, and that’s what a good book cover needs: emotional resonance, a mood, something that draws you into investing in the story. So it’s great at that. Tick. Tick. Tick.

Cross-genres: I have absolutely no idea if people are writing shifter paranormal stories any more. But it’s something I really didn’t get into designing in terms of premade covers, simply because there weren’t any stock images available and I’m not going to spend a whole day doing Photoshop work to make one picture of some handsome bloke with tiger skin for €40! It just didn’t make any financial sense. But the AI is really good at this sort of mixing of two concepts. So things like: neon Blade Runner / noir crossover; Native American / ghost story; vampire rockstars; etc., on and on and on.

It’s amazing at mixing concepts. Not so good at composing lots of elements unless you want them mixed. For example one of the first things I asked it to do just for a laugh was a computer completely made out of mouse skulls. No idea why. This is what it came up with. So let your imagination run wild on your genre crossovers.

Good at Utter Nonsense Commands: If you want something truly random you can put in statements like ‘the evil that lurks in all men’s hearts’, ‘how to dance inside your own head’, ‘when you’re lost find yourself backwards’ and it truly gets confused enough to come up with some really interesting results. So on those three concepts this is what it came up with:



So yeah, I’m very much liking what it does at the moment.
So let’s see if any of my pre-made book covers that are coming this week will actually sell. I guess that’ll be the test!
And if YOU like what it does, let’s play.
What does the Future Hold?
So, as promised at the start of this longish set of info, here are a few things about where I see it going. Because I’ve not just been playing, I’ve been thinking. I’m one of those introspective sorts of chaps.

Feedback Loop: Obviously this whole thing is based on machine learning, so as people use it more and more, it learns more and more. So the things it gets wrong, like odd perspectives or bodies with limbs in the wrong place, will improve. Let me explain why. When humans generate variations, they pick the ones that look the most correct; the more they do that, the better the AI becomes, the better results it produces, and the better the results humans then have to pick from. It’ll end up in some sort of compounding improvement over time. That’s just the nature of the beast. This is interesting to me. I honestly think in 2 or 3 years the results it produces will look utterly different from what it outputs today. Today it’s just a toddler. It’s in its infancy. But you can still get good results out of it. So that’s nothing but good. Things will look more natural for sure!
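If you like, that selection loop can be sketched as a toy simulation. Nothing here reflects how the real system actually trains; it’s purely schematic, just to show how repeated human picking could ratchet quality upward:

```python
# Toy illustration of the feedback loop: humans pick the most correct-looking
# of four variations, and each pick nudges the generator's "skill" upward.
# Entirely schematic -- the numbers and update rule are invented.

import random

random.seed(0)
skill = 0.2  # probability any one render comes out looking "correct"

for generation in range(50):
    batch = [random.random() < skill for _ in range(4)]  # four variations
    if any(batch):                        # a human picks a good one if any exist
        skill = min(1.0, skill + 0.01)    # chosen outputs reinforce the model

print(f"skill after 50 generations: {skill:.2f}")
```

Even with a tiny nudge per pick, the quality drifts upward, which is the compounding effect I mean.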

Quantum Computing: This thing feels like it’s sitting on normal cloud processing somewhere, not being given any great bandwidth. So it’s pretty damn slow. But quantum computing is here. Why am I banging on about quantum computing? Well, to me this is interesting: Sycamore, the quantum computer that Google runs, is part of Google’s AI division, and it’s a 53-qubit machine, which means 2 to the power of 53, or roughly nine quadrillion, states at once. It’s totally mental when you think about it. But the metaphor I use here is dial-up internet; that’s what this AI feels like at the moment. It’s slow. But fibre came along twenty years later, and I think quantum cloud computing is in its equivalent infancy. The more the two things work hand-in-hand, the faster this will all feel. If you’re interested in that sort of stuff, go have a read up on it. A fantastic book is Scary Smart by Mo Gawdat. But yeah, I honestly didn’t even know how much of the future is already upon us.
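For anyone who wants to check the 53-qubit number, 2 to the power of 53 works out like this:

```python
# Sanity check on the 53-qubit figure: 2 to the power of 53 states.
states = 2 ** 53
print(f"{states:,}")  # 9,007,199,254,740,992 -- roughly nine quadrillion
```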

My Job: Yeah, I think I’m going to be somewhat screwed somewhere down the line; maybe in 5 or 6 years’ time you’ll have book cover AI services that are wonderful. I can see it coming over the hill. I feel for all the illustrators out there who are suddenly fighting a losing battle. But for the moment at least, it’s still me doing the donkey work with the design and creating these images for you to make covers with. So I guess I’m safe. For now.

But I, for one, welcome our new robot painting overlords. For the moment. They’re really fun!
Interesting times.
So if you read this far, I hope you found this somewhat interesting, and if you want to have a chat with me about it or ask questions, I’m always open to that. Just give me a shout at humblenations@gmail.com
Right back to dealing with all these unruly painter children and getting them to paint me something beautiful to put on your covers.
James,
GoOnWrite.com
garilynn / July 31, 2022
What A.I. program do you use? Love your work!

humblenations / July 31, 2022
A couple of the main ones went from closed beta into open beta in July. They all seemed to come out at the same time. Go have a look for yourself at what’s out there and see what suits: https://rigorousthemes.com/blog/best-ai-image-generators/ — but yeah, I have my favourite.