If you are not careful, an LLM will mention exactly what you tell it not to

If you say "Don't mention topic X," a human adult will usually oblige, but a toddler or a large language model will often turn around and mention it immediately.

It is surprisingly tough to keep a generative AI tool like ChatGPT from doing a particular thing, and a ham-handed attempt will often actually elicit the forbidden action. In this video, I give an example prompt that demonstrates this unfortunate property and hopefully also manage to give you a little prompt-engineering intuition about why things work out this way.
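To make the intuition concrete, here is a minimal sketch contrasting the two framings. The forbidden topic ("pirates") and the prompt wording are invented for illustration; the point is simply that a negative instruction injects the very tokens you want the model to avoid, while a positive instruction never introduces them at all:

```python
# Hypothetical example: the topic and prompts below are made up for this sketch.
forbidden_topic = "pirates"

# Negative framing: naming the topic puts its tokens into the context,
# which tends to prime the model to bring it up.
negative_prompt = (
    f"Write a short story about the sea. Do not mention {forbidden_topic}."
)

# Positive framing: describe only what you DO want, so the unwanted
# topic never enters the context in the first place.
positive_prompt = (
    "Write a short story about the sea, focusing on fishing villages "
    "and the daily life of the people who live there."
)

# The negative prompt literally contains the topic we hoped to avoid;
# the positive one never mentions it.
print(forbidden_topic in negative_prompt)   # True
print(forbidden_topic in positive_prompt)   # False
```

This is why "tell it what to do, not what not to do" is such a common piece of prompt-engineering advice: the model completes text based on what is in its context, and the prohibition itself puts the topic there.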