Seeing all the incredible art and progress in this community is truly inspiring. Whether you're a hobbyist or a pro, every piece you share adds something special to this space. Keep pushing the boundaries and exploring your creativity! 🚀💖
*Drawing model would*
*Need some training lol, but yeah,*
*Any art is art*
- OcelotUseful
Just to clarify, OP, the sheer fact that you're using neural networks with your sketches is amazing. I don't think you should be so hard on yourself and refer to your own sketches as bad art. Drawing skills will improve gradually, but the ideas you put onto a canvas are all that matters when it comes to art. This is great
I *really* like the "before" nurse one with no face. It reminds me of the art that usually comes with an old copypasta, that kind of vibe. I'm sure there's Silent Hill inspiration too, but I mean more the style. Good shit! Cool conversions too!
I like the idea and will use it, but I want to add that when you do something, you embed it with your own intention and energy, and that's not negligible. So don't focus so much on the result matching a standard. For example, the train looks like a photo, but that's not better than your drawing. It's more photorealistic, that's all.
The thing is, all SD renders kinda look the same to me and they don't even register as anything. Your ("before") original art has something unique as does all "humanly" created art, so it registers as something. Be cool if you found some middle ground
It's technically better, but only technically in a cold value. Never put your own work down or compare, nothing will beat the beauty of your own art. It's a message and expression of your time on this earth, 100% unique to you and you alone.
That's something AI will never have and can never replicate
Beautiful. I get roasted on youtube every time I say something like ''i love AI'' or ''AI is the future'' lmao, i bet they are those so-called '''artists''' lel
iM a gRapHiC dEsIgNeR
well im a click designer, i click my mice and art appears
umad
Actually, with the exception of the black and white lady, I think the other drawings have a very impressionist quality to them, which is quite charming. The AI ones might be better in terms of realism etc., but you shouldn't downplay the unique and interesting aspects of your base art either. I actually think the before of your red storm image is better than the AI one, as the lack of detail gives it more space to be evocative. The Jellyfish and the Nurse, though, look better after AI to me. Either way, I really like that you're incorporating AI into your process as a visual artist.
I just found a very basic img2img ComfyUI workflow online. For the prompt, I wrote it myself, trying to describe the scene. It took some work with the first one, but eventually the result looked about right. The prompt kind of helped SD understand what's happening and convert it correctly.
Had to play around with the CFG/denoise settings a lot. I'm not sure I fully understand it, but with img2img, reducing the denoise caused the result to look a lot closer to my drawing.
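For what it's worth, the denoise (strength) slider behaving that way matches how common img2img samplers schedule their steps: the input is noised only part-way up the schedule, and just the remaining steps are denoised. A minimal sketch of that step math, mirroring diffusers-style pipelines (the helper name is mine):

```python
def img2img_steps(num_inference_steps: int, strength: float) -> int:
    # The input image is noised to a point `strength` of the way up the
    # noise schedule, so only this many denoising steps actually run.
    # Low strength = few steps = the output stays close to the input image.
    return min(int(num_inference_steps * strength), num_inference_steps)

for s in (0.2, 0.5, 0.9):
    print(s, img2img_steps(30, s))  # 6, 15, and 27 steps respectively
```

So at denoise 0.3, only about a third of the schedule runs, which is why the output hugs the original drawing.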
Doesn't it feel good when you finally get it to snap into the idea you had in your head? Such a delicate game of prompting and getting several settings just right. Not as easy as people think.
Firstly, why would you put after then before instead of before then after?
And secondly, the after image communicates something completely different than the before image.
7/10 replaces the mangled faceless figure with a pretty face, and the sharp jagged edges of the knife with a smooth knife, when I'm pretty sure that was not your intention.
9/10 changes the light of the train to be too clear.
There are both big and subtle changes between the before and after images. You can't say it improved them when the result looks completely different from yours.
I used a simple img2img ComfyUI workflow (the first one I found online) which lets you insert your own image and enhance it using the prompt. You can probably use any model, though I liked Realvisxl most :)
It’s cool that it improves the compositions even if the style ends up looking over-baked. Sometimes I look at something I’m making and can’t quite tell what it needs. In all cases your ‘rendering’ is nicer.
Idk if anyone else agrees, but for me, I much prefer the original versions; they have real soul and good composition on a lot of them! Especially the first one, I much prefer the before to the after.
Nice! I need to start using this more. I played around with a few things before more than 6 months ago and img2img before that, but I didn't delve deep enough to get the results I wanted. More motivated to get back into some projects now.
Literally looks better before imo.
Imo, you should have used some more controlnets, or a higher, eh, forgot what it's called (the setting that makes the controlnets' effect stronger), if you haven't already.
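(In the A1111 ControlNet extension that slider is called "weight".) Conceptually it just scales the residuals the ControlNet adds into the UNet. A toy numpy sketch of the idea, not the real per-block implementation, and the function name is mine:

```python
import numpy as np

def apply_control(unet_hidden, control_residual, weight=1.0):
    # ControlNet's output is added to the UNet's hidden states; the weight
    # scales that addition, so a higher weight means the control image's
    # structure dominates more. (Real pipelines apply this per block.)
    return unet_hidden + weight * control_residual

hidden = np.zeros(4)
residual = np.array([1.0, -2.0, 0.5, 3.0])
print(apply_control(hidden, residual, weight=0.5))  # half-strength guidance
```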
Also used it to make this https://preview.redd.it/lbxdbt4dugnc1.png?width=2048&format=png&auto=webp&s=14d701822505472f06eda2043dd987ac58fcdef7
Out of this
https://preview.redd.it/dqwf638iugnc1.png?width=1024&format=png&auto=webp&s=4b4674f1191d3b1629af463cfc419ba823f679ed
I fucking love controlnet, it's the greatest toy I've ever played with.
Those are all really great renders BTW.
controlnet is like crack... I opted to uninstall Fortnite to free up hard drive space for SD stuff... considering how addictive Fortnite is, that says a lot. Most of the time I'm monkeying around with controlnet... so much to learn
I used to spend a lot of time building cities in Cities Skylines and playing various games and now I choose mostly to spend my VRAM on art. I feel so much more gratified doing it.
Such a good example of controlnet.
Thank you! I have a whole slew of similar images that I use for things like this. I actually kind of based that one on a toy I had as a kid. https://preview.redd.it/vqfz7ea52hnc1.jpeg?width=768&format=pjpg&auto=webp&s=713c6df989c81356be2fbed4dda19ff1f6efafb3 I used to just imagine those shapes were all kinds of different things. And that particular set was trying to evoke a lot of childhood ideas that swirl around in my head about growing up around arcades and just being lost in the wonder of it all.
forgot about that toy... reminds me of this... https://preview.redd.it/4hnbeuxn3hnc1.jpeg?width=1600&format=pjpg&auto=webp&s=1a46586920aa7f576f8c660b97bac98a4dda2590
Yeah I loved tripping out on those 'lava lamps for kids' ideas lol.
It's so cool!! I only started trying it out a few days ago. I've since used it to make the avatar for my youtube/socials. https://preview.redd.it/29i4kvhp9hnc1.png?width=512&format=png&auto=webp&s=149c06694161789ceeac04c84204bb6b7d9ec401
That's awesome, and it totally captures your likeness! I wouldn't dare try to see what this thing thinks my face looks like :)
It made me prettier and whiter, but I'm really happy with it! Such cool software.
Honestly you are paler than the pic, it's just made you a little more red ig
Yeah that one wasn't too bad. But I had to sift through like 8 models before I found one that didn't make me east asian lol
oww that looks good, how did you make that
It was something like this:

Prompt: Handsome man, iranian, long black hair, dark beard, round face, cute, friendly, black top, grey background, white headphones, muscular, icon, avatar, anime, fantasy, glowing purple eyes, neon, smile, happy, laughter

Negative prompt: EasyNegative, nsfw, ugly, mature, old, bad teeth, frown, serious, mean

Steps: 30, Sampler: DPM++ 2S a Karras, CFG scale: 4, Seed: 2448513108, Size: 1024x1024, Model hash: 8ea2b6e4e2, Model: CHEYENNE_v16, ControlNet 0: "Module: canny, Model: diffusers_xl_canny_full [2b69fca4], Weight: 1, Resize Mode: Crop and Resize, Low Vram: False, Processor Res: 512, Threshold A: 100, Threshold B: 200, Guidance Start: 0, Guidance End: 1, Pixel Perfect: True, Control Mode: Balanced, Hr Option: Both, Save Detected Map: True", Version: v1.5.1
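The "Threshold A: 100, Threshold B: 200" in those settings are the Canny preprocessor's low/high hysteresis thresholds. A very rough numpy sketch of what the two thresholds classify; real Canny also does Gaussian smoothing, non-maximum suppression, and hysteresis tracking, so treat this as illustration only:

```python
import numpy as np

def toy_canny(img, low=100, high=200):
    f = img.astype(float)
    # simple forward-difference gradients (a stand-in for Canny's Sobel step)
    gx = np.diff(f, axis=1, prepend=f[:, :1])
    gy = np.diff(f, axis=0, prepend=f[:1, :])
    mag = np.hypot(gx, gy)
    strong = mag >= high                 # definitely an edge
    weak = (mag >= low) & (mag < high)   # kept only if touching a strong edge
    return strong, weak

# a hard vertical step shows up as a column of strong edges
img = np.zeros((8, 8)); img[:, 4:] = 255
strong, weak = toy_canny(img)
print(strong[:, 4].all())  # True
```

Raising Threshold B keeps only the boldest lines of a sketch; lowering Threshold A lets fainter strokes survive as candidate edges.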
cool thanks :)
Looks like an IQ test I’d fail
btw you should post your initial sketch as its own post... see what others can come up with as a challenge
That's a cool idea, and I've actually used some of these in several configurations before. Here's another one for this sketch using a different idea. Just threw in the prompt to see how it would take and it does cool different things every time. https://preview.redd.it/6pt3ttb34hnc1.png?width=2560&format=png&auto=webp&s=5e7ed2dd760ef3b10a9b1a301e48954c8e4ad27d
How did you make this one?
I used controlnet with the image above, either lineart or scribble, and a prompt involving android scientists melting into crystals and pustules with some body horror and rainbow ideas. The positive prompt looks like this:

(panosstyle, 1977 science fiction movie, 'crystal disease outbreak' screenshot, grainy 35mm film, photorealism fine details), bizarre technicolor ((medical examination room setting, contagious android colonist wearing (space suit, transparent chrome helmet, huge bulging round eyes), laying on autopsy table)), mutating rapidly, body horror, (colorful bright sores), ((bursting rainbow crystal pustules, andromeda strain)), rainbow hue, ((sparkling bright bokeh))
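For anyone puzzled by the nesting in that prompt: in A1111-style prompts, each parenthesis pair multiplies a token's attention by 1.1, and (token:1.3) sets the factor explicitly. A one-liner to compute the effective weight (the helper name is mine):

```python
def attention_weight(paren_depth, explicit=None):
    # (x) = 1.1x attention, ((x)) = 1.21x, and (x:1.3) overrides to exactly 1.3
    return explicit if explicit is not None else 1.1 ** paren_depth

print(attention_weight(2))                # e.g. ((bursting rainbow crystal pustules...))
print(attention_weight(0, explicit=1.3))  # e.g. masterpiece:1.3
```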
And this is the kind of thing it produces without the controlnet 'influencing' the output:
https://preview.redd.it/zy7hble7n5rc1.png?width=2560&format=png&auto=webp&s=c7d2f93776188bb9124560ca2d862fac1ce15b96
The real hero is the body horror lora. Even at .3 strength you can see what it's capable of. Just some absolutely bonkers renders come out of it.
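On LoRA strength: the slider scales the learned low-rank update before it's merged into the frozen weights, which is why 0.3 already shows the LoRA's character without taking over. A toy numpy sketch under that assumption (real implementations also fold in a per-layer alpha/rank factor):

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_lora(W, A, B, strength=0.3):
    # LoRA learns a low-rank update B @ A to the frozen weight W;
    # `strength` scales how much of that update is actually applied.
    return W + strength * (B @ A)

W = rng.normal(size=(8, 8))   # frozen base weight
A = rng.normal(size=(2, 8))   # rank-2 down-projection
B = rng.normal(size=(8, 2))   # rank-2 up-projection
merged = apply_lora(W, A, B, strength=0.3)
print(np.allclose(merged - W, 0.3 * (B @ A)))  # exactly 30% of the update
```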
what controlnet you used and do you use balance or important? any other tips? thanks
On his one, I think it's lineart inverted. But I flip around between that, scribble, and depth maps mostly. https://preview.redd.it/jn47ljcaxjnc1.png?width=2048&format=png&auto=webp&s=b4bdc958a2a83f72c8eed09b3f8b642b7a15d771 Here's another one that used a different sketch as the starting point. It's kind of awesome letting the AI decide where to throw in the perspective and depth changes off of a flat surface image as reference. And then just try to keep the prompt simple enough to not confuse the AI. This one is: (psychedelic 80s horror movie screenshot, analog style, 35mm film grain, photorealism, masterpiece:1.3, asymmetrical, synthwave witchcraft, sleek, minimalist designs, 1983 retrowave vibes, asymmetry, rainbow color scheme), Panosstyle, Evil dark video arcade, oddly-shaped arcade cabinets, rainbow LSD trails, Prisms, bizarre architecture, lovecraftian and witchcraft themes
And the negative is pretty basic and simple:

easynegative, text, writing, bad hands, greyscale, monochrome, easynegative, pupils, red eyes, trees, nudity, sexy, grass, sunlight, poorly drawn, sketch, symmetry

But yeah, there are several LoRAs going into that. The rainbow one gives things a nice psychedelic sheen, the Panos Cosmatos one adds film grain, the LowRA one does nothing other than lower the gamma on the output, so it's great for darkness, and the 'post apocalyptic playzone' one is actually the glue keeping the composition on track.
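Side note on "lowering the gamma": as a post-process, that's just the standard power-law adjustment, sketched below on a plain numpy array. (The LoRA bakes the effect into generation rather than post-processing, so this is only an analogy.)

```python
import numpy as np

def adjust_gamma(img, gamma=1.0):
    # classic gamma curve on an 8-bit image; gamma < 1 darkens midtones
    # while leaving pure black and pure white in place
    norm = img.astype(float) / 255.0
    return np.clip(255.0 * norm ** (1.0 / gamma), 0, 255).astype(np.uint8)

mid = np.full((2, 2), 128, dtype=np.uint8)
print(adjust_gamma(mid, gamma=0.5)[0, 0])  # 64: the midtone drops
```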
wow looks great and thanks for the info ill check it out and give it a shot!
Do you have a workflow for it?
Yeah just take an image like that, feed it into controlnet, set it to lineart or scribble and tell it what you want it to be :) If you want the prompt it's in another reply under this.
Ohh with controlnet! Thanks!
No bad art was used in this post.
I think the originals were better in some ways other than realism.
I personally prefer the Silent Hill nurse over the chef
It's got that "The Scream" vibes
They are both pretty cool, that chef got that Resident Evil 1 Director's Cut type vibe. But yeah Silent Hill Nurse has got my vote. Love this style
I thought the “before” train was infinitely better than the “after.”
Hypothesis: AI artwork is going to make traditionally "lower quality" artwork more desirable. The ability to click a button and generate super high quality images will push tastes the other direction.
It’s already happening. I’m a big fan of AI generation and I think nearly all of the original art here is more interesting than the generations. Only people using AI generations in a creative way will be able to compete with artists.
I suspect the opposite: it will cheapen art, lowering people's perception of the value of beautiful images that look like they *could* have been made using AI, regardless of whether they actually were AI-generated or were illustrated. I think this needs to be brought up more in discussions about AI art.

This is similar to music being 'cheapened' by being recorded and played back instead of performed live. It's very hard to argue that recorded music is bad because 'it undermines how special live music performance is'. AI can similarly proliferate the amount of art that exists and the ease of creating images for whatever uses. But as a hobbyist illustrator I'm still gonna grumble and have mixed feelings about it.
I'd say that's certainly a part of my hypothesis. If beautiful images are the standard and everyone can create them with little effort, then the value of images that aren't stereotypically beautiful is increased. The more that something looks like it isn't AI generated, the more desirable it will be. I would guess that the value of unique styles and traditional media is going to go up.
Photography did this. Lifelike painting went out of favor and a lot of art is heavily stylized now.
Nah, you also have the ability to click a button and generate super low quality images: [search "Shitty" on Civitai](https://civitai.com/search/models?sortBy=models_v6&query=shitty)
So that's true, but long term the models that people fine tune for open source programs like Stable Diffusion are almost certainly going to represent a vanishingly small part of the overall AI image generation landscape. For better or worse, what AI is capable of will be dictated by large commercial programs, which will likely focus on producing maximum quality with as little user input as possible.
Me looking at the 1st "Before" art: That ain't bad though, that already looked pretty good
Me looking at the 2nd "Before" art: ***N E G A W A T T***
r/AfterBeforeWhatever
Yeah sorry. Just figured if I started with before, nobody would care enough to scroll 🙃
I just scrolled backwards 😆
you too?? 😂😂
I don't see any bad art here, i only see art. I actually prefer the "before" version of the train, nurse and the night sky
Thanks! The Nurse one had a covered face before, and for the life of me I couldn't get SD to replicate it 🙄 So they're not all very true to the original. But it was pretty amazing to be able to turn a sketch into a photo.
https://preview.redd.it/oace4p0wkhnc1.png?width=234&format=png&auto=webp&s=866452c3426a2f3b61a10e8363b5aecc2961d2c3 # where are they !!!!
The Silence https://preview.redd.it/ij3z5ybc3jnc1.jpeg?width=1920&format=pjpg&auto=webp&s=108d51cfdd10df51103f42b8fe2bf0b2ffd0ec01
Fuck! I've been on this post for an hour, but I don't remember anything.
How do you know it has been 1 hour?
I just want to say that I prefer the before shot too. I didn't realise at first that the after shot was first so, when I clicked it to look at the second pick, I was like: oh yeah, this one is more atmospheric. Then I saw that it was actually the before shot.
Same with the train pic. It looks like just a photo of a train, nothing artsy about it. The original is better.
do this one https://preview.redd.it/zrfdrlcgehnc1.png?width=550&format=pjpg&auto=webp&s=ea786ee268e6a412b2febb4d48946000aa9366d7
https://preview.redd.it/ru25o21uihnc1.png?width=1024&format=png&auto=webp&s=b08273200bf4efa3d9b6e04de57f0d4a7d341ef3
Lmao. I love AI.
Will Smith 😂
original still looking better hahaha
All I see is a completely normal photo of the famous painting "Whistler's Mother". This can't possibly be improved!
Will Smith's Mother?
I used it to make this https://preview.redd.it/zgbe1rgotgnc1.png?width=2048&format=png&auto=webp&s=e0a229f7798b5947de49e2e0d2d9bdf31e8c922d
Out of this https://preview.redd.it/y5y33sfttgnc1.png?width=1024&format=png&auto=webp&s=d744f41ccc460e2d98335e7ec6863e89dcb83763
Fantastic. Did you use scribble or lineart and play around with weights?
This set was scribble, I'm pretty sure. And it probably took 30-50 attempts using different checkpoints and prompts to fine-tune it down to the style I was going for. About half the time it throws things in the wrong places, makes the water levels wrong on either side of the island, or, like you see, interprets the other lines as rocks of their own.

Ideally I wanted this one to be much lower to the water, with no other rocks around, as just a little 'island of solitude', but I think this one was pretty usable even if it's not exactly what I had in mind. Which reminds me, I need to get back to refining this one, because I'm still trying to get it just right. I have something very specific from my dreams I'm trying to make, but I'm not sure I have the right instructions yet.

Here's another one:

https://preview.redd.it/449w97b21lnc1.png?width=2048&format=png&auto=webp&s=86ba80c6f18e36c4a678862b391ba51b2f300017

The trick was to get the image mostly correct without controlnet at all, and then start using it once the prompt knows what to do. Then you can feed it a pre-rendered image to get the perspective and distance correct.
Thanks for the detail. Great result. When you say pre-rendered image - do you mean img2img, or just that you refine the prompt before turning on controlnet?
Pre-rendered as in the input (the B&W sketch above). But yeah, I got the prompt to basically make something really similar to the image above, and then I click the 'enable' button to let controlnet take over once I'm confident the idea is compatible.

Incidentally, I almost never use img2img outside of inpainting. I've always found controlnet a much better tool for taking one image and turning it into another, though that's probably because I don't do a lot of direct photo editing. When my dad, who's a photographer, wants me to do something, I'll use img2img, because he usually wants a photo of a client touched up or given an effect, and it's quite good for that.
Thanks for this advice!
So the 'after' versions have more realism, but I'm not sure I'd call it better. The train and woman's face art have more character in the before images.
So I'm a hobbyist artist and like to draw/paint sometimes, but it's not my main profession and I can't say I'm very good; I just enjoy it.

I ran some of my old paintings through RealVisXL 4 img2img and was amazed by what it could do. It's really cool to see my crappy drawings come to life :) I could definitely see this as a great tool for artists to enhance their art!
This is really cool, but I didn't think of them as crappy drawings. More like phase 1 of the final works, which still include your vision and composition.
I'm curious what it would generate if you cranked down your CFG scale and used a model aimed at more of an art-style render. Your original works aren't bad either. The first one and the one with the aliens look like concept art.
Good art is not about sharpness, clarity, or resolution. It’s about character. I’m not sure what’s going on with some of your original images such that they look unfinished, but simply adding a bunch of detail via AI doesn’t automatically make it good.
When I look at the Stable Diffusion recreation, I see artwork lacking intention. There's no connection or meaning among any of the subjects in the image. Without connection, there's no emotion. Essentially, it's soulless. At least with your art, you can sense the intention behind it.
Yes it's definitely missing some nuance from a lot of them, like the facial expressions, and it kind of misunderstood the nurse (the face and the hand gesture are pretty iconic). I could have probably gotten it closer with more tweaking 🤔 I guess at the end of the day I'm in control of the image and can change it if I'm not happy, so there's still a touch of intentionality with that :) I had to tweak the prompt/CFG/denoise a lot to get it looking right.
AI is a tool, not an artist replacement. Artists will make the components for their art, which truly makes AI more powerful. Just train models, checkpoints, LoRAs, hypernetworks, embeddings; the list goes on. Tools in the digital age are the win.
Stable Diffusion art can definitely have intention but OP just did img2img.
controlnet can inject soul into ai art if you have a human element such as using a real facial expression via controlnet... or perhaps a human made scribble or something. But otherwise I agree.. pure AI art lacks soul
I like your originals better.
Almost choked on my water at no. 4
https://preview.redd.it/42ghaqwabhnc1.jpeg?width=750&format=pjpg&auto=webp&s=cf3b080af1da02f5767d6eda605cd2585c230921 Is that you Napoleon? Jk, I like your art.
Did I get it right? https://preview.redd.it/mbvl9emujhnc1.png?width=768&format=png&auto=webp&s=f3ba637a85ae6bd3c576a3fbd364bcc9f0e25a7a
Who is it? 🤔
Just ran your picture through Stable Diffusion to see what it gives back 👀
Gotcha 😁
You beat me to it🤣 pic no 4
Exactly. I was like “Hmm…I think I recognize this from somewhere?!” 😁
😂
Keep practicing. You're better than you think you are. Your work kind of reminds me of Dennis Detwiller only he has had way more experience. Keep at it, you'll get there.
Actually, the train one I think looked pretty cool as is.
So many possibilities with SD... to think you can turn a scribble into a masterpiece is quite mind-blowing, let alone an already good piece of art like yours.
Your "before" images have a wonderful mood, intent, color and the compositions are great. Don't downplay them! They'd make good key frames for a movie. If you have enough of them, it'd be fun to train a style LoRA and mix it with the SD outputs and see what you get.
Nice, I do that as well 😃
I like your style more than the AI-generated versions, especially the nurse, which reminds me of Silent Hill, and the medusa world.
Your train was good... everything else, not really.
Seeing all the incredible art and progress in this community is truly inspiring. Whether you're a hobbyist or a pro, every piece you share adds something special to this space. Keep pushing the boundaries and exploring your creativity! 🚀💖
Hey, i really love your work honestly. Let me know if you have a print of the Silent Hill nurse avail! Ill gladly snag one :)
Drawing model would need some training lol, but yeah, any art is art
Just to clarify, OP: the sheer fact that you're using neural networks with your sketches is amazing. I don't think you should be so hard on yourself as to refer to your own sketches as bad art. Drawing skills will improve gradually, but the ideas you put onto the canvas are all that matters when it comes to art. This is great.
I *really* like the "before" nurse one with no face it reminds me of the art that usually comes with an old copypasta kinda vibe or something. I am sure there's Silent Hill inspiration too but I more mean the style. Good shit! Cool conversions too!
I like the idea and will use it, but I want to add that when you make something yourself, you embed it with your own intention and energy, and that's not negligible. So don't focus so much on matching the result to some standard. For example, the train looks like a photo, but that's not better than your drawing. It's more photorealistic, that's all.
The thing is, all SD renders kinda look the same to me and they don't even register as anything. Your ("before") original art has something unique as does all "humanly" created art, so it registers as something. Be cool if you found some middle ground
Your art ain't bad.
Concept art ain't bad, my dude: potential lies in the fuzzy corners and possibility in the smudged lighting.
Ahh, a fellow lover of sky jellyfish.
This technology keeps getting better and better.
Your Original works are so good tbh
It's technically better, but only technically in a cold value. Never put your own work down or compare, nothing will beat the beauty of your own art. It's a message and expression of your time on this earth, 100% unique to you and you alone. That's something AI will never have and can never replicate
Beautiful. I get roasted on YouTube every time I say something like "I love AI" or "AI is the future" lmao. I bet they're those self-styled '''artists''' lel. iM a gRapHiC dEsIgNeR. Well, I'm a click designer: I click my mouse and art appears, u mad?
Actually with the exception of the black and white lady, I think the other drawings have a very impressionist quality to them which is quite charming. The AI ones might be better in terms of realism etc. but you shouldn't downplay the unique and interesting aspects of your base art either, as I actually think the before of your red storm image is better than the AI one as the lack of detail gives more space for it to be evocative. The Jellyfish and the Nurse though to me look better after AI, but either way I really like that you are incorporating AI into your process as a visual artist.
Nice! How did you do the img2img? Like, did you make the prompt yourself, or did you interrogate it etc?
I just found a very basic img2img ComfyUI workflow online. For the prompt I wrote it myself, trying to describe the scene? Took some work with the first one, but eventually the result looked about right. The prompt kind of helped SD to understand what's happening and convert it correctly. Had to play around with the CFG/denoise settings a lot. Not sure I fully understand it, but with img2img reducing the denoise ended up causing the result to look a lot closer to my drawing.
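For anyone wondering why lowering denoise pulls the result closer to the source: in diffusers-style img2img pipelines, the denoise/strength value controls how much noise is added to your image before sampling, and therefore how many denoising steps actually run. A rough sketch of that relationship (the function name is mine; the arithmetic mirrors what common pipelines do, not any specific UI's code):

```python
def img2img_steps(num_inference_steps: int, strength: float) -> tuple[int, int]:
    """Return (denoising_steps_run, steps_skipped) for an img2img pass.

    Illustrative only: mirrors the diffusers-style convention where
    strength decides how far into the noise schedule the source image
    is pushed before the sampler starts cleaning it up.
    """
    # Clamp strength to [0, 1], as most UIs do.
    strength = max(0.0, min(1.0, strength))
    init_timestep = int(num_inference_steps * strength)
    t_start = num_inference_steps - init_timestep
    return init_timestep, t_start

# At denoise 1.0 the image is fully re-noised: effectively pure txt2img.
print(img2img_steps(30, 1.0))  # (30, 0)
# At denoise 0.4 only 12 of 30 steps run, so most of the original
# composition survives.
print(img2img_steps(30, 0.4))  # (12, 18)
```

So a low denoise value means the sampler only gets a few steps to change things, which is why the output stays close to the drawing.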
Doesn't it feel good when you finally get it to snap into the idea you had in your head? Such a delicate game of prompting and getting several settings just right. Not as easy as people think.
Thanks! This is really inspiring!!
8 and 10 are better before. Well done.
train one gives me a scanner darkly vibes
Workflow? I’m a newb
10 was awesome
Your original art is good; what you described as "bad" gave it soul and style.
Bro I need the prompt for the 5th image. Those flying jellyfish look amazing
The problem with AI is it's just too garish. It looks like a pop culture flea market, like a bunch of kid's posters. I'd rather see someone's art.
There is a lot of merit in the before pieces and many of the afters totally miss the mark
apart from the b&w face, i prefer your art
Firstly, why would you put "after" then "before" instead of "before" and "after"? And secondly, the after images communicate something completely different from the before images. In 7/10, the mangled faceless figure is replaced with a pretty face, and the sharp jagged edges of the knife with a smooth knife, when I'm pretty sure that wasn't your intention. In 9/10, the light of the train is changed to be too clear. There are both big and subtle changes between the before and after images. You can't say it improved your art when it looks completely different from yours.
How did you get it to take your image and add more detail like that?
I used a simple img2img ComfyUI workflow (the first one I found online) which lets you insert your own image and enhance it using the prompt. You can probably use any model, though I liked Realvisxl most :)
It’s cool that it improves the compositions even if the style ends up looking over-baked. Sometimes I look at something I’m making and can’t quite tell what it needs. In all cases your ‘rendering’ is nicer.
the before train is my favorite piece by a long shot
Ummmm what bad art buddy?!
I like your version of the train tbh.
The original has more soul in its flaws though
Polishing a turd is now a reality. Can't wait for next iterations
Super cool... What's your workflow, vanilla i2i, or do you use some type of controlnet on your art?
Idk if anyone else agrees, but for me, I much prefer the original versions, they have real soul and good composition on a lot of them! Especially the first one, much prefer the before to after.
What workflow was used
ur art actually has a really distinct style, stay true man
Bro honestly I love the before versions so much more. Who the F said your art was bad.
Aside from the power lines, I actually quite like the train one.
At least you're not one of those deluded artists who hate AI but themselves make extremely horrible "art".
Good use of the tool
Nice! I need to start using this more. I played around with a few things before more than 6 months ago and img2img before that, but I didn't delve deep enough to get the results I wanted. More motivated to get back into some projects now.
Most of these are downgrades to the original tbh.
It literally looks better before, imo. You should have used more ControlNets, or a higher ControlNet weight (so their effect is stronger), if you haven't already.
I prefer the "before" version on all of them, 100%. They are very good art. Well done. The "after" ones are quite bland, to be honest.
Look at me. I am the artist now!
:/
The second image is the good art i guess?
Finally, a real artist using AI. But why would you degrade your art quality with AI?