We are developing a model based on a cGAN architecture to virtually stain cells according to their viability. This will simplify laboratory work, since no chemical staining is required. It will also enable dynamic studies on the same cell population and, potentially, the rescue of cells by changing the extracellular environment early in the apoptosis process.
We will use the resource to train the cGAN on ground-truth data so that the model can then be applied to phase-contrast microscopy images, i.e., chemically unstained cells. The ground truth consists of fluorescence microscopy images of chemically stained cells.
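As a rough illustration of how such paired training data could be assembled, the sketch below matches phase-contrast inputs with fluorescence targets using a TensorFlow input pipeline. The directory names, file format, and patch size are hypothetical placeholders, not the project's actual data layout.

import tensorflow as tf

IMG_SIZE = 256  # assumed patch size

def load_pair(phase_path, fluor_path):
    # Read and decode one phase-contrast / fluorescence image pair
    phase = tf.io.decode_png(tf.io.read_file(phase_path), channels=1)
    fluor = tf.io.decode_png(tf.io.read_file(fluor_path), channels=1)
    # Resize and rescale intensities to [-1, 1], a common cGAN convention
    phase = tf.image.resize(tf.cast(phase, tf.float32), (IMG_SIZE, IMG_SIZE)) / 127.5 - 1.0
    fluor = tf.image.resize(tf.cast(fluor, tf.float32), (IMG_SIZE, IMG_SIZE)) / 127.5 - 1.0
    return phase, fluor

# Hypothetical directories; pairing assumes matching sort order of filenames
phase_files = sorted(tf.io.gfile.glob("data/phase/*.png"))
fluor_files = sorted(tf.io.gfile.glob("data/fluorescence/*.png"))

dataset = (
    tf.data.Dataset.from_tensor_slices((phase_files, fluor_files))
    .map(load_pair, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(256)
    .batch(8)
    .prefetch(tf.data.AUTOTUNE)
)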
The software is written in Python using Keras via DeepTrack 2.0 (developed by my colleague Giovanni Volpe), a comprehensive deep-learning framework for digital microscopy. In the virtual staining project, the generation of virtually stained live or apoptotic cells relies on conditional generative adversarial networks (cGANs) implemented in TensorFlow.
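To make the cGAN approach concrete, the following is a minimal pix2pix-style training step in TensorFlow/Keras: the generator maps a phase-contrast image to a virtual stain, and the discriminator judges (phase-contrast, stain) pairs. The tiny networks and loss weighting here are stand-in assumptions for illustration, not the architecture actually used via DeepTrack 2.0.

import tensorflow as tf
from tensorflow.keras import layers

def make_generator():
    # Stand-in encoder-decoder: phase-contrast image in, virtual stain out
    inp = layers.Input((256, 256, 1))
    x = layers.Conv2D(64, 4, 2, padding="same", activation="relu")(inp)
    x = layers.Conv2DTranspose(64, 4, 2, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 3, padding="same", activation="tanh")(x)
    return tf.keras.Model(inp, out)

def make_discriminator():
    # Critic conditioned on the phase-contrast input (the "conditional" in cGAN)
    phase = layers.Input((256, 256, 1))
    stain = layers.Input((256, 256, 1))
    x = layers.Concatenate()([phase, stain])
    x = layers.Conv2D(64, 4, 2, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 4, padding="same")(x)  # per-patch real/fake logits
    return tf.keras.Model([phase, stain], out)

generator, discriminator = make_generator(), make_discriminator()
g_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
d_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

@tf.function
def train_step(phase, fluor):
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = generator(phase, training=True)
        real_logits = discriminator([phase, fluor], training=True)
        fake_logits = discriminator([phase, fake], training=True)
        # Discriminator: real pairs -> 1, generated pairs -> 0
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        # Generator: fool the discriminator + stay close to ground truth (L1)
        g_loss = bce(tf.ones_like(fake_logits), fake_logits) + \
                 100.0 * tf.reduce_mean(tf.abs(fluor - fake))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    return g_loss, d_loss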
Moreover, cell counting will be carried out with the self-supervised, single-shot deep-learning technique LodeSTAR or, alternatively, MAGIK, both of which are extensions of DeepTrack. However, training these techniques requires GPUs, which are not available at the departmental level.
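Assuming a TensorFlow backend, a quick check like the following can verify that the requested GPUs are actually visible to the framework before launching LodeSTAR or MAGIK training runs on the resource.

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"Training can proceed on {len(gpus)} GPU(s):", gpus)
else:
    print("No GPU detected; training would be impractical on CPU.")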