Self-supervised monocular depth estimation methods suffer from occlusion fading, which results from the lack of ground-truth supervision for occluded pixels. A recent work introduced a post-processing method to reduce occlusion fading; however, its results exhibit a severe halo effect. This work proposes a novel edge-guided post-processing method that reduces occlusion fading for self-supervised monocular depth estimation. We also introduce Atrous Spatial Pyramid Pooling with Forward-Path (ASPPF) into the network to reduce computational cost and improve inference speed. The proposed ASPPF-based network is lighter, faster, and more accurate than current depth estimation networks. Our lightweight network requires only 7.6 million parameters and achieves up to 67 frames per second for 256×512 inputs on a single NVIDIA GTX 1080 GPU. The proposed network also outperforms current state-of-the-art methods on the KITTI benchmark. Together, the ASPPF-based network and edge-guided post-processing produce better results, both quantitatively and qualitatively, than competing methods.
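To make the ASPP component concrete, the sketch below shows the standard atrous (dilated) pyramid idea in plain NumPy: one kernel is applied at several dilation rates and the multi-scale responses are stacked. This is a 1-D illustration of conventional ASPP only; the function names are hypothetical, and the Forward-Path modification (the "F" in ASPPF) is not specified in this abstract, so it is not modeled here.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    # 'same'-padded 1-D atrous convolution: kernel taps are spaced
    # `dilation` samples apart, enlarging the receptive field without
    # adding parameters.
    k = len(kernel)
    pad = dilation * (k // 2)
    xp = np.pad(x, pad)
    out = np.zeros(len(x))
    for i in range(len(x)):
        for j in range(k):
            out[i] += kernel[j] * xp[i + j * dilation]
    return out

def aspp_1d(x, kernel, rates=(1, 2, 4)):
    # ASPP core idea: run the same filter at several dilation rates in
    # parallel and stack the responses as multi-scale context features.
    return np.stack([dilated_conv1d(x, kernel, r) for r in rates])
```

With a unit impulse and a box kernel, rate 1 spreads the response to adjacent samples while rate 2 reaches two samples away, showing how each branch captures context at a different scale.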