A local light field (LF) surrounding an image allows rendering novel views from nearby viewpoints while preserving depth cues such as head-motion parallax. Additionally, LF rendering with a small baseline enables digital refocusing. However, capturing such light fields requires a multi-camera array: a dense N×N light field requires an N×N camera array (capturing N×N images of size H×W), which leads to slower rendering times and larger bandwidth requirements. To overcome these issues, recent advances in light field reconstruction have made it possible to reconstruct an N×N light field from only one or a few input images. However, full user immersion requires a wider field of view, which the planar camera arrays used in current light field rendering cannot provide. We therefore extend light field rendering from planar to spherical.
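To make the bandwidth concern concrete, the back-of-the-envelope calculation below estimates the raw size of one dense light field; the parameter values (N = 8, 1080p views, 8-bit RGB) are illustrative assumptions, not figures from our setup.

```python
# Back-of-the-envelope size of a dense N x N light field.
# All parameter values below are illustrative assumptions.
N = 8                  # views per angular dimension (N x N camera array)
H, W = 1080, 1920      # per-view resolution
BYTES_PER_PIXEL = 3    # 8-bit RGB, uncompressed

views = N * N
raw_bytes = views * H * W * BYTES_PER_PIXEL
print(f"{views} views -> {raw_bytes / 2**20:.1f} MiB uncompressed")
# 64 views -> 379.7 MiB for a single light field; capturing, storing,
# and streaming this motivates reconstruction from one or a few images.
```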
An omnidirectional image (also known as a spherical or 360° image) provides 3 Degrees of Freedom (DoF). By reconstructing the light field on spheres surrounding this single omnidirectional image, the 3 DoF can be extended to 6 DoF, allowing the user to perform not only head rotations but also translational body movements. To achieve spherical light field reconstruction, our approach requires rendering a synthetic dataset for training and testing the deep learning models. Since traditional depth-image-based rendering is slow for omnidirectional images due to their strong distortion, our approach instead represents the scene features implicitly using a sphere-position-aware network.
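To make the last step concrete, the sketch below shows one way a sphere-position-aware network could be set up: each equirectangular pixel is mapped to a unit direction on the sphere, Fourier-encoded, and decoded by a small MLP. The function and class names, the encoding scheme, and all sizes are illustrative assumptions, not the final architecture.

```python
# Hypothetical sketch of a sphere-position-aware implicit network:
# equirectangular pixels -> unit sphere directions -> Fourier encoding
# -> MLP producing per-ray features (e.g. color + depth/alpha).
import torch
import torch.nn as nn


def equirect_to_dirs(h, w):
    """Unit view directions for each pixel of an h x w equirectangular image."""
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    lon = (u + 0.5) / w * 2 * torch.pi - torch.pi     # longitude in [-pi, pi)
    lat = torch.pi / 2 - (v + 0.5) / h * torch.pi     # latitude, +pi/2 at top row
    return torch.stack(
        [torch.cos(lat) * torch.sin(lon),
         torch.sin(lat),
         torch.cos(lat) * torch.cos(lon)], dim=-1)    # (h, w, 3)


class SpherePosAwareMLP(nn.Module):
    """Assumed architecture: Fourier features of sphere directions + small MLP."""

    def __init__(self, num_freqs=6, hidden=128, out_dim=4):
        super().__init__()
        self.freqs = 2.0 ** torch.arange(num_freqs)   # octave-spaced frequencies
        in_dim = 3 * 2 * num_freqs                    # sin/cos per axis per freq
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim))               # e.g. RGB + depth/alpha

    def forward(self, dirs):                          # dirs: (..., 3)
        x = dirs[..., None] * self.freqs              # (..., 3, num_freqs)
        enc = torch.cat([x.sin(), x.cos()], dim=-1).flatten(-2)
        return self.mlp(enc)


dirs = equirect_to_dirs(256, 512)
feats = SpherePosAwareMLP()(dirs)                     # (256, 512, 4)
```

Encoding sphere directions rather than planar pixel coordinates is what makes such a representation distortion-aware: nearby directions on the sphere stay nearby in feature space even at the poles, where equirectangular pixels are heavily stretched.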