Artificial intelligence models that require huge and diverse training datasets, like ChatGPT, seem poised to place their creators between the proverbial rock and a hard place on a pair of important legal issues. On the one hand, it is easy to imagine someone eventually suing over bad AI advice, and in that situation it would be convenient to invoke something like Section 230 and argue that the fault lies with training data gathered from someone else over the internet. But then... people have already sued on the grounds that the training process infringed their intellectual property, and in that situation the OpenAIs and Googles of the world would love to claim that the AI's output is original work that they own. These are unpleasantly contradictory stances.