Embodiment plays a significant role in learning in biological systems. Similarly, embodiment is believed to be important for AI systems, enabling cognitive agents to actively acquire knowledge and skills through interaction with their surrounding environment. Embodied AI requires tools, algorithms, and techniques to cope with real-world challenges including, but not limited to, uncertainty, physical constraints, scarce data, and high variability. A key open problem is understanding how embodied agents can efficiently learn to solve complex tasks and adapt to dynamic environments. Reinforcement Learning (RL) has achieved notable success in robotic locomotion and manipulation. By leveraging RL algorithms, embodied agents can learn through trial and error, continually refining their actions based on feedback from their environment. This iterative process enables an embodied agent to learn novel skills over time in pursuit of a goal. However, despite its widespread adoption, RL faces significant limitations, particularly in exploration efficiency and long-horizon tasks, and it often relies on large amounts of training data and external feedback. It is anticipated that curiosity-driven learning will be essential for making progress towards fully autonomous embodied agents. In this project, we aim to develop novel algorithms that allow embodied agents to learn to interact with their environment effectively through intrinsic motivation, reducing dependence on extensive external supervision and improving autonomous learning capabilities.
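To make the idea of intrinsic motivation concrete, the following is a minimal sketch of one simple form of curiosity-driven exploration: a count-based novelty bonus added to a sparse extrinsic reward in tabular Q-learning. The chain environment, parameter values, and function names are illustrative assumptions for exposition, not the algorithms this project proposes.

```python
import numpy as np
from collections import defaultdict

def intrinsic_reward(counts, state, beta=0.5):
    """Count-based exploration bonus: rarely visited states yield a larger
    intrinsic reward, which decays as the state becomes familiar."""
    counts[state] += 1
    return beta / np.sqrt(counts[state])

def q_learning_with_curiosity(n_states=10, episodes=200, alpha=0.1,
                              gamma=0.95, eps=0.1, beta=0.5, seed=0):
    """Toy tabular Q-learning on a 1-D chain where the extrinsic reward is
    sparse (only the far-right state pays off), so the curiosity bonus is
    what drives the agent to explore toward the goal."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))        # actions: 0 = left, 1 = right
    counts = defaultdict(int)          # state visitation counts
    for _ in range(episodes):
        s = 0
        for _ in range(4 * n_states):  # cap episode length
            # epsilon-greedy action selection
            a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
            s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r_ext = 1.0 if s_next == n_states - 1 else 0.0
            # total reward = sparse extrinsic reward + curiosity bonus
            r = r_ext + intrinsic_reward(counts, s_next, beta)
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
            if r_ext > 0:              # episode ends at the goal
                break
    return Q

Q = q_learning_with_curiosity()
```

Because the bonus shrinks with each visit, it rewards discovering novel states early on and fades once the environment is familiar, which is the basic mechanism by which intrinsic motivation can substitute for dense external feedback.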