This paper presents a practical framework for occlusion-aware augmented reality applications based on visual-inertial RGB-D SLAM. First, an efficient visual SLAM framework with map-merging-based relocalization is introduced. When pose estimation fails, a new environment map is generated; the current and previous environment maps are then merged once a loop closure is detected. Next, the framework is integrated with inertial information to address the missing-environment-map problem: when pose estimation fails, the camera pose is approximated from the angular velocity and translational acceleration measurements. Experimental results show that the proposed method performs well in the presence of pose tracking failures. Finally, an occlusion-aware augmented reality application is built on top of the SLAM framework.
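The inertial fallback described above amounts to dead-reckoning the camera pose from gyroscope and accelerometer readings while visual tracking is lost. The following is a minimal sketch of one IMU propagation step under standard kinematic assumptions; the function and parameter names (`propagate_pose`, `omega`, `accel`, etc.) are illustrative and not taken from the paper, and bias/noise handling is omitted.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector (cross-product operator)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def propagate_pose(R, p, v, omega, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """One dead-reckoning step from body-frame IMU measurements.

    R     : 3x3 world-from-body rotation
    p, v  : position and velocity in the world frame
    omega : angular velocity (rad/s, body frame)
    accel : linear acceleration (m/s^2, body frame, includes gravity reaction)
    """
    # Rotation update via the exponential map (Rodrigues' formula).
    theta = np.linalg.norm(omega) * dt
    if theta > 1e-12:
        axis = omega / np.linalg.norm(omega)
        K = skew(axis)
        dR = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    else:
        dR = np.eye(3)
    R_new = R @ dR

    # Rotate the measured acceleration into the world frame and remove gravity.
    a_world = R @ accel + g
    v_new = v + a_world * dt
    p_new = p + v * dt + 0.5 * a_world * dt ** 2
    return R_new, p_new, v_new
```

In practice such open-loop integration drifts quickly, which is why the framework merges the temporary map back into the previous one as soon as a loop closure re-establishes visual tracking.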