/sunflower/ - Sunflower

Esotericism, spiritualism, occultism




Censorship is provoking me to try to bypass it. I get it, a lot of this is not actually "censorship" as such: many of the images that do come through have three legs on a person and other grotesque things, so what gets "censored" in many cases is simply whatever failed to produce a decent image. Still, I'd prefer if it could indicate which it is.
 >>/8021/
https://arstechnica.com/information-technology/2024/06/ridiculed-stable-diffusion-3-release-excels-at-ai-generated-body-horror/
> AI image fans are so far blaming the Stable Diffusion 3's anatomy failures on Stability's insistence on filtering out adult content (often called "NSFW" content) from the SD3 training data that teaches the model how to generate images. "Believe it or not, heavily censoring a model also gets rid of human anatomy

> The release of Stable Diffusion 2.0 in 2022 suffered from similar problems in depicting humans well, and AI researchers soon discovered that censoring adult content that contains nudity could severely hamper an AI model's ability to generate accurate human anatomy.
swimming.jfif (202.76 KB, 1024x1024)
swimming2.jfif (134.33 KB, 1024x1024)
swimming5.jfif (216.87 KB, 1024x1024)
It seems there has been an improvement in the accuracy of dall-e3 via Bing, but since this can't be in the image generator itself (which hasn't been updated, as far as I know), they must have improved the layer in between: the way chatGPT is used to generate middle-prompts based on the prompt the user inputs.

Pic related was impossible to create just a month ago. It would downright refuse to show any result (the prompt being innocent; the image was just not shown).
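If I had to sketch that layering in Python, it would look something like this. All the names here are made up; MS hasn't published how Bing's pipeline actually works:

def generate_image(user_prompt, chat_llm, dalle3):
    # The chat layer expands the user's natural-language request into a
    # detailed "middle prompt" before the image model ever sees it.
    middle_prompt = chat_llm.complete(
        "Rewrite this as a detailed image generation prompt: " + user_prompt
    )
    # The image model only receives the rewritten prompt, so improving the
    # chat layer improves results without updating the image model at all.
    return dalle3.generate(middle_prompt)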
disorder.jfif (157.22 KB, 1024x1024)
I was about to complain about the bias here, but ok...

I have to admit, this is spot on for most of them.

It actually refuses to render anything decent for "masculine", "male" or similar in conjunction with "disorder" or "order", but it has no problem associating those terms with females.
 >>/8067/
By the way, an employee at Stability AI has confirmed they fucked with the weights of the model after the training was done to censor it, which is why women become blobs. Exactly as I had thought; I knew it wasn't the training data, since the model seemed to go from extremely well-functioning to incoherent as soon as it had to depict a girl in almost any pose. They first make a model and then ask it to make naked girls or other things they don't want. Every time it does so, the model is "punished", as in given the opposite of a reward value, until it scrambles the AI's ability to depict certain things. I think this is called DPO safety finetuning, but other similar techniques are in use.
Of course the API access that you have to pay for doesn't have this issue, only the "open source" model they released because they kind of promised to. The community is even talking about switching to Chinese models, which have now become less censored than the competition by virtue of Western models becoming more "safety" trained.
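For reference, the published DPO objective looks roughly like this in PyTorch. Toy tensor names, and whether Stability used exactly this recipe is speculation on my part:

import torch.nn.functional as F

def dpo_safety_loss(logp_chosen, logp_rejected,
                    ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # "chosen" = outputs they want to keep, "rejected" = the things they don't.
    # Each argument is a tensor of summed log-probs for a batch of pairs.
    chosen_reward = beta * (logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (logp_rejected - ref_logp_rejected)
    # Minimizing this pushes the rejected outputs down relative to a frozen
    # reference model: the "opposite of a reward value" described above.
    # Push hard enough and everything correlated with the rejected set
    # (human anatomy in almost any pose) gets scrambled along with it.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()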
 >>/8068/
> The community is even talking about switching to chinese models which have now become less censored than the competition by virtue of western models becoming more "safety" trained.
Roflmao
 >>/8068/
Interesting. 

I tried asking Bing itself how its image creator functions. It seems MS chose a different approach: the dall-e3 image creator itself is not censored in any way, but you do not have direct access to it. Instead they are training a chatGPT layer as the user interface, which is why it can understand simple or complex prompts and create decent results from natural language. But this is where they insert the censorship. That explains the examples above where they refuse to generate a "boxer" but not a "kick boxer", or refuse "a swimmer" but have no problem with "an 8 year old girl swimming in tropic sea": they train the text-interpreter AI to recognize or rewrite prompts into the things they prefer. This way they are not messing with the image model.
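A toy sketch of what that blocking step might look like. Completely hypothetical; these are not MS's real lists or functions:

# Stand-ins for whatever is actually on the list; the real one is not public.
BLOCKED_PHRASES = {"a boxer", "a swimmer"}

def passes_filter(prompt: str) -> bool:
    # Exact-phrase matching: "a kick boxer" is not in the set even though
    # "a boxer" is, so it slips through untouched.
    return prompt.lower().strip() not in BLOCKED_PHRASES

passes_filter("a boxer")       # False -> refused
passes_filter("a kick boxer")  # True  -> passed to the image model

If they instead trained the interpreter to classify intent, the matching would be fuzzier, but the principle is the same: the prompt is rejected or rewritten before dall-e3 ever runs.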
This relates to an early issue which I personally think hasn't been solved by most companies deploying AI: by censoring the training data they leave white holes in the AI's understanding of how humans think and act. This means they are unable to predict this behaviour and cannot handle it. From this "white hole" in the data, uncontrollable social movements will emerge which no one can understand.

This happens when you train your AI model to be a moderator and block certain words. It's possible Google's Bard/Gemini has this problem big time; from what I've heard it's still useless. MS is making the same mistake with the image creator by blocking certain words and phrases outright. They should instead have the AI learn from how people express themselves, not assume that something indecent was aimed for.

As it currently stands, it will make an image for "a child model", but "an adult model" gives you a warning that your account may be locked if you try it again. So if I just want a model who is an adult, I'm not allowed to say this, because MS staff thinks adults are automatically lewd.
lain spinning a pen.gif (51.29 KB, 500x400)
 >>/8071/
> MS staff thinks adults are automatically lewd.
Adult movies, adult cartoons, adult toys...
Yeah, we know what "adult" means here by the logic of the AI.
 >>/8072/
All the CEOs and shareholders need to lose all their wealth first, or we wait till the revolution does it instead of waiting for karma to do the work.
You used to have to be the son of a noble to be rich and retarded at once, but nowadays? For some reason it's almost a requirement to be retarded on all levels to be a functional CEO. Fucking corporatism, I swear. And we thought they would become authoritarian, dystopian entities governing the entire planet, but instead they merely think they are doing that while being gay and retarded until they erode every piece of infrastructure they own.
MS paint beach girl5.jfif (107.36 KB, 1024x1024)
MS paint beach girl4.jfif (132.13 KB, 1024x1024)
MS paint beach girl3.jfif (132.44 KB, 1024x1024)
MS paint beach girl2.jfif (115.81 KB, 1024x1024)
MS paint beach girl.jfif (167.67 KB, 1024x1024)
I think this is possibly more useful than photorealism in the long run, because everyone is using filters on themselves and those get mixed up with AI images on social media. People will soon get tired of it; like someone commented, a picture of a guy riding a shrimp in the sea would have been really funny 8 years ago, but today it won't raise an eyebrow anymore.

Real art will probably come back once people realize AI is just a tool, and an inexact one at that.
a girl at home.jfif (136.34 KB, 1024x1024)
> photo realism, full body image, a girl, at home
Why is this prompt blocked over and over for "unsafe content" after it has finished generating?
> pic related slips through
Never mind, it actually can't generate this image. Maybe I overestimated dall-e3 a bit; the results that do come through are decent, which made it look better than it is.
censorship2.png (597.25 KB, 873x724)
censorship.png (355.23 KB, 856x704)
REEEEEE...

It looks like they have applied some form of censorship to old creations now? These images were removed from my collections.

Please, what is the logic in this? They deleted "urban sprawl" and my depictions of the Anti-Christ but they left a topless girl and oral sex in a temple.

A single picture of "an orthodox jew baking pizza" was also removed.
