McCarthy and others have demonstrated that human language tends toward brevity in collaborative two-player communication games: players introduce abstractions that permit more concise utterances. In this prior work, both players are human. Increasingly, however, LLM-based applications (e.g., ChatGPT) take the place of one of these players in everyday activities, with a person and a conversational agent interacting back and forth. Do these agents follow the same conversational dynamics we observe between humans? Can they understand abstractions? Can they introduce abstractions of their own? These are timely questions that should be of broad interest to the research community.