👍
I use gen AI tools to create and post images that I think are beautiful. Sometimes people agree with my selections. 100% of my electricity comes from solar power. 🌤️
👍
Just got the new 0.4.2 testflight version. That is a very long list of updates and fixes! Thanks for continuing to make improvements.
After a careful review, I believe you are correct. I’ll take the matter up with the clothier.
I never use models trained on specific people or ask for celebs in my prompts. This is the base Flux model plus a Gil Elvgren style model. I can see the resemblance though, but it wasn’t something I was aiming for.
Why not both? 🤷
I think some of both. I gravitate towards the more fashion photography models/loras first, rather than start with the porn models. I also prompt for an assortment of expressions and nationalities, and I do go into detail on the hair length/color/style, and I think that does a lot for generating a wide variety of faces.
Some of that is also basic curation. I generate a ton of images, then look at it as a photography shoot, and select the ones that grab my attention more.
TY!
Yeah, Flux is amazing with hands. I still get a few gnarled fingers and toes (I did get one gal with foot hands that was disturbing), but on the whole, it’s worlds better than the Stable Diffusion models.
These look like modern day Art Frahm pinups. Digging the oil paint aesthetic!
In the Expanse books, there’s a planet called Auberon that has an 8 hour rotation, so 4 hours of light and 4 hours of dark. They decided that “1 day” would be light-dark-light and “1 night” is dark-light-dark. It’s really interesting how they describe the way society adapts to the cycle of having a midnight sun and both a midmorning and evening sunset.
I thought I had a good system where each outpost was only exporting 1 solid, 1 liquid, and 1 gas. This allowed me to isolate and sort at the receiving outpost.
The problem occurs when each outpost's import and export containers get full. At that point, materials leave the export station, travel to the import location where they can't be unloaded, THEN THEY COME BACK to the original outpost, where they get offloaded. You wind up with the same materials filling both the import and export containers. Now the entire material flow is completely borked: nothing is getting imported, and all you have access to is the stuff that's locally produced.
There’s not much out there on training LoRAs that aren’t anime characters, and that just isn’t my thing. I don’t know a chibi from a booru, and most of those tutorials sound like gibberish to me. So I’m kind of just pushing buttons and seeing what happens over lots of iterations.
For this, I settled on the class of "place". I tried "location" but it gave me strange results, like lots of pictures of maps and GPS-type screens. I didn't use any regularization images. Like you mentioned, I couldn't think of what to use. I think regularization would be more useful for face training anyway.
I read that a batch size of one gave more detailed results, so I set it there and never changed it. I also didn’t use any repeats since I had 161 images.
I did carefully tag each photo with a caption .txt file using Utilities > BLIP Captioning in Kohya_ss. That improved results over the versions I made with no tags. Results improved again dramatically when I went back and manually cleaned up the captions to be more consistent. For instance, consolidating "building", "structure", "barn", "church", and "house" all to just "cabin".
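That kind of caption cleanup can be scripted instead of done by hand. Here's a minimal sketch (the `dataset` folder name and the synonym list are hypothetical, matching the example above) that rewrites the BLIP-generated .txt files in place:

```python
import pathlib

# Hypothetical cleanup: collapse near-synonym tags to one consistent
# class token across all caption .txt files in the dataset folder.
SYNONYMS = ["building", "structure", "barn", "church", "house"]
TARGET = "cabin"

def clean_caption(text: str) -> str:
    # Replace each synonym with the target token.
    for word in SYNONYMS:
        text = text.replace(word, TARGET)
    return text

for path in pathlib.Path("dataset").glob("*.txt"):
    path.write_text(clean_caption(path.read_text()))
```

Plain string replacement like this can over-match inside longer words, so it's worth eyeballing the results, but for short BLIP captions it's usually good enough.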
I used 150 epochs, which gave me 24,150 steps. Is that high or low? I have no idea. They say 2,000 steps or so for a face, and a full location is way more complex than a single face… It seems to work, but it took me 8 different versions to get a model I was happy with.
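For anyone checking the math, the step count follows directly from the settings I mentioned (161 images, no repeats, batch size 1, 150 epochs):

```python
# Step-count arithmetic for LoRA training as described above:
# total steps = images * repeats * epochs / batch_size
images, repeats, epochs, batch_size = 161, 1, 150, 1
total_steps = images * repeats * epochs // batch_size
print(total_steps)  # 24150
```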
Let me know what ends up working for you. I’d love to have more discussions about this stuff. As a reward for reading this far, here’s a sneak peek at my next lora based on RDR2’s Guarma island. https://files.catbox.moe/w1jdya.png. Still a work in progress.
Appreciate the details. Some interesting models in here that I’ve not seen before. Will have to give them a spin.
not OP, but you can download the original PNG from the catbox link above, then drag it into SD WebUI's PNG Info tab.
Really loving the work you put into this app.
On the profile header, could we get an option to have the more expanded appearance? I like seeing the banner art when people add that to their profile, but the new gradient overlay and position of the avatar and username obscure a lot of it now.
I tried to leave a tip, but the testflight version indicates the transaction would not be real. Is there a way to do that and still use testflight?