The allocated resources will be used for two projects. The first project aims to train neural networks to predict the binding of small peptides to the protein survivin. I will investigate different network architectures and elucidate which factors influence binding. Currently, I plan to separate long-range effects along the sequence from more local effects. Architectures that combine the two effects, and the mechanisms by which they are best combined, will also be investigated. Statistics from all combinations of architectures will be reported in a paper. The architectures will be optimized, and multiple runs will be needed to infer the full distribution of the networks' outputs. This requires rather large computational resources, which is why I am applying for CPU time here.
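The local/long-range split above can be sketched as follows. This is a minimal NumPy illustration, not the actual models to be trained: the window size, feature dimensions, mean-pooling, and concatenation readout are all placeholder assumptions chosen only to show one possible way of combining the two branches.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues

def one_hot(peptide):
    """Encode a peptide string as a (length, 20) one-hot matrix."""
    idx = {a: i for i, a in enumerate(AMINO_ACIDS)}
    x = np.zeros((len(peptide), 20))
    for i, aa in enumerate(peptide):
        x[i, idx[aa]] = 1.0
    return x

def local_branch(x, w):
    """1D convolution capturing local (window-sized) effects.
    x: (L, d_in), w: (k, d_in, d_out) -> (L - k + 1, d_out)."""
    k = w.shape[0]
    out = np.stack([np.tensordot(x[i:i + k], w, axes=([0, 1], [0, 1]))
                    for i in range(x.shape[0] - k + 1)])
    return np.maximum(out, 0.0)  # ReLU

def attention_branch(x, wq, wk, wv):
    """Single-head self-attention capturing long-range pairwise effects."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[1])
    a = np.exp(scores - scores.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)
    return a @ v

def combined_score(peptide, params):
    """Mean-pool each branch, concatenate, and apply a linear readout
    to a single binding score (one illustrative combination mechanism)."""
    x = one_hot(peptide)
    loc = local_branch(x, params["w_conv"]).mean(axis=0)
    glob = attention_branch(x, params["wq"], params["wk"], params["wv"]).mean(axis=0)
    return float(np.concatenate([loc, glob]) @ params["w_out"])

# Random, untrained weights purely for demonstration.
rng = np.random.default_rng(0)
d = 8
params = {
    "w_conv": rng.normal(size=(3, 20, d)) * 0.1,
    "wq": rng.normal(size=(20, d)) * 0.1,
    "wk": rng.normal(size=(20, d)) * 0.1,
    "wv": rng.normal(size=(20, d)) * 0.1,
    "w_out": rng.normal(size=(2 * d,)) * 0.1,
}
score = combined_score("SLLEQVAKA", params)  # hypothetical peptide
print(score)
```

In practice each branch would be a full trained module (e.g. stacked convolutions versus a transformer encoder), and comparing concatenation against other fusion mechanisms is exactly the architectural question the project will investigate.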
In the second project, I will optimize the detector geometry for an experiment conducted at the FemtoMAX beamline. We are pumping proteins with THz radiation; since we are the only group in the world conducting these experiments, the beamline has to be rebuilt for our purposes. Unfortunately, the beamline scientists cannot deliver a detector geometry with sufficient accuracy for our purposes, so it has to be optimized manually. There are, however, nine parameters, five of which are correlated, and the objective function of the optimization is sensitive to changes down to the eighth decimal place. Because of the parameter correlations, all combinations of them must be covered in grid searches. I have restricted the number of combinations somewhat by traversing lines through the parameter space, each of which yields a set of parameter combinations.
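The line-traversal restriction can be sketched as below. This is a minimal illustration under assumed placeholders: the split into four independent grid-searched parameters and five correlated ones, the base point, the line directions, and the step sizes are all hypothetical values standing in for the real detector-geometry parameters.

```python
import itertools
import numpy as np

# Hypothetical split: 4 independent parameters on a coarse grid,
# 5 correlated parameters varied only along a few line directions
# through a base point (all ranges are placeholders).
indep_grids = [np.linspace(lo, hi, 3) for lo, hi in
               [(-1, 1), (-1, 1), (0, 2), (0, 2)]]

base = np.zeros(5)                    # current best estimate of correlated params
directions = [np.eye(5)[0],           # three example search lines
              np.eye(5)[1],
              np.ones(5) / np.sqrt(5)]
steps = np.linspace(-1e-4, 1e-4, 5)   # fine steps: the objective is very sensitive

def candidates():
    """Yield full 9-parameter vectors: coarse grid x line traversal."""
    for indep in itertools.product(*indep_grids):
        for d in directions:
            for t in steps:
                yield np.concatenate([indep, base + t * d])

n = sum(1 for _ in candidates())
print(n)  # 3**4 grid points x 3 lines x 5 steps = 1215 evaluations
```

The point of the restriction is visible in the count: a full grid over all nine parameters at comparable resolution would grow exponentially, whereas traversing a handful of lines through the correlated subspace keeps the number of objective evaluations tractable while still probing the correlated directions.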
The computation time will most likely also be used for additional machine-learning projects in biophysics, implementing ideas that may arise during the one-year allocation period.