Sharing some of what I picked up about AI (media generation) thus far:
There are closed models and open models. Tensor.Art, Civit.Ai and other sites like them use open models that have somewhat less polished results, but they don't censor (as much) and don't care about copyright (as much).
Everything you find on those sites can be installed/run locally, if you have enough video RAM. (If y'all want help running things locally, we can dig into that rabbit hole, but it's a whole dif hole.)
The most relevant closed models are runwayml (it lets you transfer motion/facial expressions to any character from a single image, and its Aleph tool has massive video editing/effects powers) and Midjourney (easy stylish/cool results, and very good video generation for animation/anime and pretty shit). Krea has some similar tools (most based on open models), and lots of cool shit too. (All of those have free trials except Midjourney, which has no free tier at all; only Krea's trial recurs, the others run out.)
With open models you have more control: you can train your own AIs and make LoRAs. LoRAs are like DLC for your model. You gather 100-5000 pictures of something, caption them, train the model on them, and it learns to do that thing.
The most common LoRAs are either a specific character you can summon with a trigger word, or a style, like a film director's look trained on images from his work, etc.
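If you're curious what a LoRA actually is under the hood: instead of retraining the model's big weight matrices, it ships two tiny matrices per layer whose product gets added on top. Here's a minimal numpy sketch of that idea (toy shapes, made-up numbers, not any real model's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one layer in the model (toy-sized for illustration).
d_out, d_in, rank = 8, 8, 2
W = rng.normal(size=(d_out, d_in))

# A LoRA ships only two small matrices per adapted layer: B (d_out x rank)
# and A (rank x d_in). Training updates these, never W itself.
B = rng.normal(size=(d_out, rank))
A = rng.normal(size=(rank, d_in))
alpha = 1.0  # LoRA strength: the weight slider you see on sites like Tensor.Art

def forward(x: np.ndarray) -> np.ndarray:
    # Effective weight = base plus a low-rank "DLC" update.
    return (W + alpha * (B @ A)) @ x

x = rng.normal(size=(d_in,))
y = forward(x)

# The LoRA stores far fewer numbers than the layer it modifies.
print(B.size + A.size, "LoRA params vs", W.size, "base params")
```

That's also why LoRA files are small and why you can stack several on one model: each is just another low-rank term added to the same frozen weights.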
In Tensor.Art you pick a model and then add LoRAs or Embeddings (similar to LoRAs), and pay attention to whether the LoRA has instructions on how to use it (keywords, configs). There are also ControlNets, which are ways to guide the generation, most famously for poses. Tensor.Art lets you generate a ControlNet input from a reference, meaning you copy the pose of an image and use it to generate other things in that pose, even a whole crowd if you want.
It's an advanced form of image2image. In theory all models can be extended by LoRAs, including video models; it's all very expansive.
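The way ControlNet guidance works is that the reference image first gets turned into a "control image" (an edge map, a depth map, a pose skeleton) and the model is conditioned on that instead of the raw photo. Here's a tiny numpy sketch of producing one kind of control image, a crude edge map; real pipelines use proper detectors (OpenCV's Canny, OpenPose, etc.), so this is only to show the shape of the idea:

```python
import numpy as np

def edge_map(img: np.ndarray, thresh: float = 0.2) -> np.ndarray:
    """Crude gradient-based edge map from a grayscale image in [0, 1].

    This is the kind of black-and-white "control image" a Canny-style
    ControlNet is conditioned on (sites compute it for you from a reference).
    """
    # Horizontal and vertical first differences, padded to keep the shape.
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    mag = np.hypot(gx, gy)
    # Keep only strong gradients -> white edges on a black background.
    return (mag > thresh * mag.max()).astype(np.uint8) * 255

# Toy grayscale "reference": a bright square on a dark background.
img = np.zeros((64, 64), dtype=np.float32)
img[16:48, 16:48] = 1.0
ctrl = edge_map(img)  # white outline of the square, black everywhere else
```

The generator then has to respect those white lines while the prompt and LoRAs decide everything else, which is why one extracted pose can be reused across totally different characters and styles.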
A cool new toy right now is image editing models (in particular Flux Kontext): you just tell it what to change in an image and it will mostly do it. "Change the background to a church", "turn the head to profile", etc. It works very well. I use it mostly on Krea.
Those models can learn new tricks via LoRAs as well.
That is some of it, there is a lot.