In this project, we examine the robustness and stability of the acoustic counterpart of NeRF (Neural Radiance Fields), the so-called NAF (Neural Acoustic Field). Both methods represent a field (visual or acoustic) by training a neural network on scattered data points in a scene. In the case of NeRF, this means photographing a scene from different viewpoints, training a network on those images together with the corresponding camera angles, and later inferring how the scene would look from a new set of angles. NeRF was originally introduced in 2020 for image processing and was later extended to acoustic fields under the name Neural Acoustic Fields. NAFs are less well known, and more research into the practicality and robustness of the method is required, most interestingly its resilience and stability against noise and corrupted data. We therefore want to examine how well these methods handle environments with disturbances, such as moving objects or sound pollution. Significant compute power is required since, for the time being, each acoustic scene and case requires retraining the network. The final outcome of the project is to understand how well NAFs perform on unseen acoustic scenes. This is a BSc thesis project at the Marcus Wallenberg Laboratory for Sound and Vibration Research, KTH, supervised by Asst. Prof. Elias Zea.
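To make the core idea concrete, the sketch below is a minimal, hypothetical illustration (not the actual NeRF/NAF implementation) of a coordinate-based network: a small MLP is trained on scattered samples of a synthetic scalar field over a 2D "room" and can then be queried at arbitrary coordinates. The field, network size, and training settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scattered training points in a 2D "room" and a synthetic stand-in field
# (in a real NAF these would be measured acoustic responses).
X = rng.uniform(-1.0, 1.0, size=(256, 2))
y = np.sin(3.0 * X[:, 0]) * np.cos(3.0 * X[:, 1])

# One-hidden-layer MLP with tanh units, trained by plain gradient descent.
W1 = rng.normal(0.0, 0.5, size=(2, 64)); b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.5, size=(64, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(pts):
    """Map coordinates to field values; also return hidden activations."""
    h = np.tanh(pts @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

_, pred0 = forward(X)
mse0 = float(np.mean((pred0 - y) ** 2))  # error before training

for step in range(5000):
    h, pred = forward(X)
    err = pred - y
    # Hand-derived gradients of the mean-squared error.
    gpred = 2.0 * err[:, None] / len(X)
    gW2 = h.T @ gpred; gb2 = gpred.sum(0)
    gh = gpred @ W2.T * (1.0 - h ** 2)
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
mse = float(np.mean((pred - y) ** 2))
print(f"MSE before training: {mse0:.4f}, after: {mse:.4f}")
```

After training, `forward` can be evaluated at coordinates that were never sampled, which is the "inference from new angles/positions" step described above; the same scheme also shows why each new scene needs retraining, since the weights encode one specific field.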