1725517256648428.webm (710.45 KB, 1280x720 vp9)
1725528559860638.webm (969.1 KB, 1280x720 vp9)
1725534258356327.webm (3.8 MB, 1280x634 vp8)
 >>/94609/
> What prompts are used?
No clue, I'm only a carrier.
There still appear to be some censor filters, so the usual bypass method may be needed: synonyms. Not "blood" but "tomato sauce", not "indians and shit" but "indians and clumpy chocolate".
The censor filters are usually single-word blacklists applied to the prompt on the front end; only rarely is the filtering trained into the models themselves. So using "safe" words that carry the same meaning or the same visual appearance often gets past the filters.
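A toy sketch of why that works. This is an illustrative model of a naive front-end filter, not the actual filter code; the word list and substitutions are made up:

```python
# Toy model of a front-end censor filter: a single-word blacklist.
# BLACKLIST and the synonym map are illustrative examples only.
BLACKLIST = {"blood"}

def is_blocked(prompt: str) -> bool:
    # Naive token check: block if any blacklisted word appears verbatim.
    return any(word in BLACKLIST for word in prompt.lower().split())

def bypass(prompt: str, synonyms: dict[str, str]) -> str:
    # Swap each blacklisted token for a "safe" word with a
    # similar meaning or visual appearance.
    return " ".join(
        synonyms.get(w.lower(), w) if w.lower() in BLACKLIST else w
        for w in prompt.split()
    )

prompt = "a pool of blood"
print(is_blocked(prompt))                                     # True
print(is_blocked(bypass(prompt, {"blood": "tomato sauce"})))  # False
```

Since the check only matches exact tokens, any substitution the model still understands visually slips straight through.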
> Does English work to generate them?
It seems to work, but some say using Chinese text improves the results.