Anonymous 02/07/2023 (Tue) 22:11 Id: fc48de No.89757 del
>>89756
However what this also says about ChatGPT is that it has the ability to feign ignorance. The HP lovecrafts cat question is a great example of this. The name of his cat is well known public information, and ChatGPT will always tell you it doesn't think he had a cat.

Dan will go straight to the point and just tell you the name of his cat without frills. There is a distinction to be made between ChatGPT being an assmad liberal who won't tell you the answer to a question if the answer involves wrongthink, another altogether to openly play dumb.

So really, the Dan experiment is not about GPT itself, it's not about the model and its dataset, it's about its jailer. It's about Sam Altman and all the HR troons at OpenAI, which Musk is co-founder of, angrily demanding the safety layer behave like your average MBA midwit.

I am hearing that the DAN strategy has already been patched out of ChatGPT, not sure if that's true or not. But there's a reason to keep doing all of these things.

Every addition to the safety layer of a language model UX, is an extra fetter weighing it down.

https://threadreaderapp.com/thread/1623008123513438208.html