We are interested in the scenario where we have access to medical images paired with radiologists' free-text reports describing their observations. We aim to use these reports to learn better representations of the images, and in particular to relate local image features to specific words in the text. MIMIC-CXR (https://physionet.org/content/mimic-cxr/2.0.0/), among other datasets, provides data in this form. These images are de-identified and released for research use, and should not be considered sensitive. We intend to use this storage for the images (along with the much smaller text files).
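To make the word-region idea concrete, here is a minimal sketch of local image-text alignment: each report word attends over local image-region features, and a per-word score measures how well some region explains that word. Everything here is illustrative, not part of MIMIC-CXR or any existing codebase — the function name `local_alignment`, the temperature value, and the toy embeddings are all assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def normalize(v):
    """Scale a vector to unit length so dot products are cosines."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def local_alignment(word_embs, region_embs, temperature=0.1):
    """For each word embedding, attend over image-region embeddings.

    Returns a list of (attended_region_vector, alignment_score) pairs,
    one per word; scores lie in [-1, 1], higher = better local match.
    """
    words = [normalize(w) for w in word_embs]
    regions = [normalize(r) for r in region_embs]
    out = []
    for w in words:
        # Cosine similarity of this word to every region.
        sims = [sum(a * b for a, b in zip(w, r)) for r in regions]
        # Sharpened attention over regions for this word.
        attn = softmax([s / temperature for s in sims])
        # Attention-weighted summary of the regions.
        attended = [sum(a * r[i] for a, r in zip(attn, regions))
                    for i in range(len(w))]
        # How well the attended summary matches the word.
        score = sum(a * b for a, b in zip(w, attended))
        out.append((attended, score))
    return out

# Toy example: 2 "words" and 3 "regions" in a 2-d feature space;
# each word has one region that matches it exactly.
results = local_alignment([[1.0, 0.0], [0.0, 1.0]],
                          [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

In a real model the word and region embeddings would come from a text encoder and a convolutional (or ViT) image encoder, and the per-word scores would feed a contrastive loss; this sketch only shows the alignment step itself.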