Under severe weather conditions, the quality of images captured outdoors is directly degraded by floating atmospheric particles. Haze removal methods therefore play a critical role in preserving image quality. The most difficult part of haze removal is eliminating haze that spreads over the entire image. Many CNN-based methods have been proposed for this task, and they can be divided into two types: those that use a multi-scale structure and those that stack layers. The former degrades the image because some of the original information is lost, while the latter increases computational complexity because the resolution is never reduced. In addition, a large number of parameters is required to secure the expressive power of the model, which leads to high memory consumption. To tackle these problems, we aim to 1) downsample the image while saving parameters and maintaining the quality of the generated image, and 2) consider information from the entire image when removing the haze. For the first problem, we use a feature extractor that has proven effective in other tasks, learn to optimize the output image at low resolution, and prepare kernels with various dilation rates to expand the receptive fields. For the second problem, we use an attention structure to determine which parts of the feature map should be focused on across the entire image. By incorporating these modules, our method achieves better results on both synthetic and real-world images than state-of-the-art methods.
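The receptive-field expansion from kernels with various dilation rates can be illustrated with a minimal sketch. The helper name and the specific kernel size and dilation rates below are illustrative assumptions; the abstract does not specify the values actually used:

```python
def receptive_field(kernel_size: int, dilation: int) -> int:
    """Receptive field of a single dilated convolution along one axis.

    A dilated kernel inserts (dilation - 1) gaps between taps, so its
    effective extent grows to dilation * (kernel_size - 1) + 1 pixels
    while the parameter count stays that of the undilated kernel.
    """
    return dilation * (kernel_size - 1) + 1

# Hypothetical parallel 3x3 kernels with increasing dilation rates cover
# progressively wider context at the same parameter cost, which is the
# motivation for mixing several rates to capture image-wide haze.
for d in (1, 2, 4, 8):
    side = receptive_field(3, d)
    print(f"dilation={d}: {side}x{side} receptive field")
```

Running the loop prints fields of 3x3, 5x5, 9x9, and 17x17, showing how a few parallel branches span very different spatial scales without downsampling.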
ASJC Scopus subject areas
- Computer Science (General)