Do you ever fill a closet with freshly poured glasses of beer? I don't think many people do, and that is exactly the problem: it may be impossible to find an image for your generative AI widget's training data that depicts a particular strange thing *not* happening.
In this video, I examine an example of an artificially generated image that went wrong in an interesting and subtle way. I use it to explore some themes around prompt engineering: notably, that it can be surprisingly difficult to give negative ("don't do this") instructions, and that it is important to remember the ways in which artificial intelligence does not really understand what it is doing.
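To make the negative-instruction problem concrete, here is a minimal sketch using Hugging Face's `diffusers` library (the video doesn't name a specific tool, so the library and checkpoint are my assumptions). It contrasts two ways of asking for "no beer glasses": writing the negation into the text prompt, which models frequently ignore, versus a dedicated `negative_prompt` parameter, which classifier-free guidance actively steers away from.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: any Stable Diffusion checkpoint works here; this one is
# just a commonly used example.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Attempt 1: a negative instruction inside the prompt. The text encoder
# still sees the tokens "glasses of beer", and their presence often makes
# beer glasses MORE likely to appear -- the model has no real grasp of "no".
image_naive = pipe(
    prompt="an empty closet with no glasses of beer inside"
).images[0]

# Attempt 2: keep the prompt purely positive and move the unwanted
# concept into negative_prompt, which guidance pushes the image away from.
image_guided = pipe(
    prompt="an empty closet, bare shelves, interior photo",
    negative_prompt="glasses of beer, beverages, drinks",
).images[0]

image_naive.save("closet_naive.png")
image_guided.save("closet_guided.png")
```

Even the second approach is no guarantee; it only nudges the sampling process away from a concept, which is a very different thing from the model understanding that closets don't usually contain beer.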