Removing Diffraction Image Artifacts in Under-Display Camera via Dynamic Skip Connection Networks

Ruicheng Feng 1      Chongyi Li 1      Huaijin Chen 2      Shuai Li 2      Chen Change Loy 1      Jinwei Gu 2,3
1 S-Lab, Nanyang Technological University
2 Tetras.AI
3 Shanghai AI Laboratory

Mouse out: degraded images. Mouse over: restored results.

Abstract


Recent development of Under-Display Camera (UDC) systems provides a true bezel-less and notch-free viewing experience on smartphones (and TVs, laptops, tablets), while allowing images to be captured from the selfie camera embedded underneath. In a typical UDC system, the microstructure of the semi-transparent organic light-emitting diode (OLED) pixel array attenuates and diffracts the incident light on the camera, resulting in significant image quality degradation. Oftentimes, noise, flare, haze, and blur can be observed in UDC images. In this work, we aim to analyze and tackle the aforementioned degradation problems. We define a physics-based image formation model to better understand the degradation. In addition, we utilize one of the world's first commodity UDC smartphone prototypes to measure the real-world Point Spread Function (PSF) of the UDC system, and provide a model-based data synthesis pipeline to generate realistically degraded images. We specially design a new domain-knowledge-enabled Dynamic Skip Connection Network (DISCNet) to restore the UDC images. We demonstrate the effectiveness of our method through extensive experiments on both synthetic and real UDC data. Our physics-based image formation model and proposed DISCNet can provide foundations for further exploration in UDC image restoration, and even for general diffraction artifact removal in a broader sense.

Materials



Paper

Data

Code

What is UDC?

As the above figure shows, a typical UDC system has the camera module placed underneath and closely attached to the semi-transparent Organic Light-Emitting Diode (OLED) display. Although the display looks partially transparent, the regions where light can pass through, i.e., the gaps between the display pixels, are usually on the micrometer scale, which substantially diffracts the incoming light and affects the light propagation from the scene to the sensor. In particular, the light emitted from a point light source is modulated by the OLED and the camera lens before being captured by the sensor. UDC systems therefore introduce a new class of complex image degradation problems, combining flare, haze, blur, and noise. On the right is a simulated example of the image formation model with a real-captured PSF.
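To make the diffraction effect concrete, the minimal sketch below estimates a diffraction PSF from a binary transmission mask of the display pixel openings under a simple Fraunhofer (far-field) approximation, where the intensity PSF is proportional to the squared magnitude of the Fourier transform of the aperture. The mask geometry here is a made-up toy pattern, not the actual ZTE display layout, and this is not the calibration procedure used for the released PSFs.

import numpy as np

def diffraction_psf(mask):
    """Approximate far-field diffraction PSF of a transmission mask.

    Under the Fraunhofer approximation, the intensity PSF is proportional
    to the squared magnitude of the Fourier transform of the aperture.
    """
    field = np.fft.fftshift(np.fft.fft2(mask))
    psf = np.abs(field) ** 2
    return psf / psf.sum()  # normalize so the PSF sums to 1

# Toy OLED-like mask: a periodic grid of small square openings
# (placeholder geometry, not the real display pixel layout).
size, pitch, opening = 512, 16, 6
mask = np.zeros((size, size), dtype=np.float32)
for y in range(0, size, pitch):
    for x in range(0, size, pitch):
        mask[y:y + opening, x:x + opening] = 1.0

psf = diffraction_psf(mask)
print(psf.shape, psf.sum())  # (512, 512), 1.0

The periodic openings produce the characteristic regular spikes and side lobes seen in the measured UDC PSF, which is why bright light sources turn into structured flare in the captured images.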

Dataset

We provide both synthetic and real datasets of Under-Display Camera images. The training and validation subsets are publicly available. Downloads are available via Google Drive or by running the Python code.

For synthetic data, we gather 2016 HDR patches of size [800, 800, 3] for training and 360 pairs for testing. Since these patches are reprojected and cropped from 360-degree panoramas, they may exhibit overlapping content (but from different perspective views). Image values range over [0, 500] and are stored in '.npy' format. For each crop, we release the ground-truth image, and you can simulate the corresponding degraded image with the calibrated PSFs by running the code.
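As a rough illustration of this synthesis step, the sketch below convolves an HDR ground-truth patch with a calibrated PSF per color channel, adds noise, and clips to the sensor saturation level. The file names, the PSF layout (one kernel per channel), the noise level, and the saturation value are placeholder assumptions; the released code implements the full pipeline described in the paper.

import numpy as np
from scipy.signal import fftconvolve

def simulate_udc(gt, psf, sigma=0.01, saturation=500.0):
    """Synthesize a degraded UDC image: per-channel PSF convolution,
    additive Gaussian noise, and clipping at the saturation level.
    (Simplified sketch; the released code follows the paper's pipeline.)"""
    degraded = np.stack(
        [fftconvolve(gt[..., c], psf[..., c], mode='same') for c in range(3)],
        axis=-1,
    )
    degraded += np.random.normal(scale=sigma, size=degraded.shape)
    return np.clip(degraded, 0.0, saturation)

# Hypothetical file names; the actual dataset layout may differ.
gt = np.load('GT/0001.npy')        # HDR ground truth, values in [0, 500]
psf = np.load('psf/zte_psf.npy')   # calibrated PSF, assumed one kernel per channel
lq = simulate_udc(gt, psf)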

For real data, we release 30 HDR images of size [3264, 2448, 3] captured with the ZTE phone. Similarly, image values range over [0, 16] and are stored in '.npy' format. The images are in the linear domain and are not processed by the phone's built-in ISP. We provide a simple post-processing pipeline for better visualization. The camera outputs of the phone (after its ISP) are also released for reference and for the color correction pipeline.
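For a quick look at the real captures, the sketch below applies a minimal visualization pipeline: normalize the linear-domain values by the saturation level, apply gamma correction, and quantize to 8 bits. It is only a stand-in for the released post-processing script, which may use a different tone-mapping and color-correction step; the file name and the assumption of RGB channel order are placeholders.

import numpy as np
import cv2

def visualize(hdr, saturation=16.0, gamma=2.2):
    """Map a linear-domain HDR capture to an 8-bit image for display.
    (Simple normalize + gamma; the released script may differ.)"""
    img = np.clip(hdr / saturation, 0.0, 1.0)
    img = img ** (1.0 / gamma)                  # gamma correction
    return (img * 255.0 + 0.5).astype(np.uint8)

hdr = np.load('real/0001.npy')                  # hypothetical file name, assumed RGB
cv2.imwrite('0001_vis.png', cv2.cvtColor(visualize(hdr), cv2.COLOR_RGB2BGR))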

Method

Overview of our proposed Dynamic Skip Connection Network (DISCNet).

The main restoration branch consists of an encoder and a decoder, with feature maps propagated and transformed through dynamic skip connections. These connections apply multi-scale dynamic convolutions, using filters generated from the PSF kernel code and spatial information in the input image.
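The sketch below illustrates the core idea of one such dynamic skip connection in PyTorch: a small branch takes the encoder feature concatenated with a PSF kernel-code map and predicts per-pixel filters, which are then applied to the skip feature before it is passed to the decoder. Layer sizes, the softmax normalization, and all names are illustrative assumptions and do not reproduce the exact DISCNet architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicSkip(nn.Module):
    """Predict per-pixel k x k filters from the feature map and a PSF kernel
    code, then filter the skip feature with them (illustrative sketch)."""
    def __init__(self, channels, code_dim, k=3):
        super().__init__()
        self.k = k
        self.filter_gen = nn.Sequential(
            nn.Conv2d(channels + code_dim, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, k * k, 3, padding=1),  # one k x k filter per pixel
        )

    def forward(self, feat, kernel_code):
        b, c, h, w = feat.shape
        code = kernel_code.expand(b, -1, h, w)          # broadcast kernel code spatially
        filters = self.filter_gen(torch.cat([feat, code], dim=1))   # (B, k*k, H, W)
        filters = F.softmax(filters, dim=1)             # normalize each per-pixel filter
        patches = F.unfold(feat, self.k, padding=self.k // 2)       # (B, C*k*k, H*W)
        patches = patches.view(b, c, self.k * self.k, h * w)
        filters = filters.view(b, 1, self.k * self.k, h * w)
        return (patches * filters).sum(dim=2).view(b, c, h, w)

# Toy usage with made-up sizes.
skip = DynamicSkip(channels=64, code_dim=8)
feat = torch.randn(2, 64, 32, 32)
code = torch.randn(1, 8, 1, 1)
print(skip(feat, code).shape)  # torch.Size([2, 64, 32, 32])

The softmax over the predicted filters is just one way to keep the dynamic convolution well behaved; the actual network also applies this idea at multiple scales of the encoder-decoder, which is omitted here for brevity.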

Results


Visual results on synthetic data.


Visual results on real data.

License

The dataset is made available for academic research purposes only. All images in the synthetic dataset are collected from the Internet, and the copyright belongs to the original owners. If any of the images belongs to you and you would like it removed, please inform us and we will remove it from our dataset immediately. The real dataset was collected and uploaded by us, and we retain all copyrights to it.

Citation

If you find our dataset and paper useful for your research, please consider citing our work:
@inproceedings{feng2021removing,
  author    = {Feng, Ruicheng and Li, Chongyi and Chen, Huaijin and Li, Shuai and Loy, Chen Change and Gu, Jinwei},
  title     = {Removing Diffraction Image Artifacts in Under-Display Camera via Dynamic Skip Connection Networks},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}

Contact

If you have any questions, please contact us at ruicheng002@ntu.edu.sg.