bed14.jpg (131.52 KB, 1024x1024)
recs.png (1.76 MB, 1279x863)
Maybe they are training the AI on the prompts themselves, as was implied. After that stupid filter was added, it blocked everything at the prompt stage, BUT if you slowly narrow in on the image topic, it suddenly accepts what was previously a banned prompt.

Having someone in a room that also contained a bed used to be an instant block, no matter the context.

Now, with some tweaking, like leading a dumb kid along step by step to show that it isn't dangerous, it's suddenly fine:

On top of that, the web results recommended based on the generated image are way better than anything a regular web search turns up. Why are they hiding the real results like this?