Sunflower 04/25/2024 (Thu) 17:38 Id: 9e95d5 No.7547 del
(131.52 KB 1024x1024 bed14.jpg)
(1.76 MB 1279x863 recs.png)
Maybe they are training the AI on the prompts themselves, as was implied. After that stupid filter was added, it blocked everything at the prompt stage, BUT if you slowly narrow in on the image topic, it suddenly accepts what was previously a banned prompt.

Before, having someone in a room that also had a bed in it was an instant block at the prompt, no matter the context.

Now, with some tweaking magic, like leading a stupid kid along to show that it isn't dangerous, it's suddenly ok:

On top of that, the web results recommended based on the generated image are way better than anything that turns up in a regular web search. Why are they hiding the real results like this?