This relates to an early issue which I personally think hasn't been solved by most AI companies: by censoring the training data, they are leaving white holes in the AI's understanding of how humans think and act. This means the model is unable to predict that behaviour and cannot handle it. Out of these white holes in the data, uncontrollable social movements will emerge that no one can understand.

This happens when you train your AI model to be a moderator and block certain words. It's possible Google's Bard/Gemini has this problem big time; from what I heard, it's still fairly useless. MS is making the same mistake with its image creator by blocking certain words and phrases outright. They should instead have the AI learn from how people actually express themselves, rather than assuming that something indecent was intended.
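To make the failure mode concrete, here's a minimal Python sketch of the kind of naive word-level blocking I suspect is happening. This is purely hypothetical; the term list and the logic are my assumptions for illustration, not MS's actual system:

```python
# Hypothetical word-level blocklist moderator (illustration only,
# not any vendor's real implementation).
BLOCKED_TERMS = {"adult"}  # assumed blocklist entry

def naive_moderator(prompt: str) -> bool:
    """Accept a prompt only if it contains no blocked term,
    regardless of the context the term appears in."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

print(naive_moderator("a child model"))   # True  -> image gets generated
print(naive_moderator("an adult model"))  # False -> warning / lockout threat
```

The filter keys on the word itself rather than on intent, so a perfectly innocent prompt gets flagged while genuinely problematic phrasings that avoid the listed words sail through.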

As it currently stands, it will happily make an image for "a child model", but "an adult model" gives you a warning that your account may be locked if you try it again. So if I just want a model who is an adult, I'm not allowed to say so, because MS staff apparently assume "adult" automatically means lewd.