Generative Quanta Color Imaging

Elmore Family School of Electrical and Computer Engineering

Purdue University, USA

Abstract

The astonishing development of single-photon cameras has created an unprecedented opportunity for scientific and industrial imaging. However, the high data throughput generated by these 1-bit sensors creates a significant bottleneck for low-power applications. In this paper, we explore the possibility of generating a color image from a single binary frame of a single-photon camera. We find this problem to be particularly difficult for standard colorization approaches due to the substantial degree of exposure variation. The core innovation of our paper is an exposure synthesis model, framed as a neural ordinary differential equation (NeuralODE), that allows us to generate a continuum of exposures from a single observation. This ensures that the binary images passed to the colorizer have a consistent exposure, resulting in notably enhanced colorization. We demonstrate applications of the method in single-image and burst colorization and show superior generative performance over baselines.
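For readers unfamiliar with single-photon (quanta) sensors, the sketch below simulates how a single 1-bit frame arises from Poisson photon arrivals followed by one-bit quantization, and how the same scene yields very different binary frames as the exposure changes. This is a minimal illustration of the standard quanta sensor model, not the data pipeline used in the paper; the function name, threshold, and exposure values are chosen here purely for illustration.

import numpy as np

def simulate_binary_frame(radiance, exposure=1.0, threshold=1, rng=None):
    """Simulate one 1-bit frame from a single-photon camera.

    radiance  : normalized scene intensity in [0, 1]
    exposure  : scalar scaling the mean photon count (integration time / gain)
    threshold : photon count at or above which a pixel reads 1 (typically 1)
    """
    rng = np.random.default_rng() if rng is None else rng
    photons = rng.poisson(exposure * radiance)      # Poisson photon arrivals
    return (photons >= threshold).astype(np.uint8)  # 1-bit quantization

# The same scene produces very different binary frames as exposure varies,
# which is exactly the variation that breaks a fixed-exposure colorizer.
scene = np.random.rand(64, 64)
underexposed = simulate_binary_frame(scene, exposure=0.2)
overexposed  = simulate_binary_frame(scene, exposure=20.0)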

Colorization Challenge under Varying Exposure Conditions

Illustration of exposure correction and colorization of binary images using neural networks. (a) shows a range of images from overexposed to underexposed, illustrating the loss of image detail caused by exposure variation. (b) contrasts the standard colorization workflow with our proposed approach. (i) In standard colorization, a neural network \(\mathcal{F}^{\text{aug}}\) learns to map a binary image \(\mathbf{Y}\) to the corresponding color image \(\mathbf{X}_{c}\), where the superscript `aug' indicates that the colorizer is trained on a dataset with exposure-augmented images. (ii) In contrast, our approach does not require training the colorizer with exposure-augmented images. (c) compares the colorization results: the first row is the output of a colorizer trained without augmentation, the second row is the output of a colorizer trained with augmented data, the third row shows the results of our method, and the last row provides the ground-truth images for reference.
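To make the contrast in (b) concrete, the following sketch lays out the two inference routes side by side. The networks here are tiny stand-ins rather than the architectures from the paper, and passing the exposure levels as extra input channels is a simplification of the paper's weight-adjustment mechanism; all module names and shapes are assumptions made for illustration.

import torch
import torch.nn as nn

class TinyColorizer(nn.Module):
    """Placeholder for a colorizer (F or F^aug): 1-channel binary in, RGB out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(1, 3, kernel_size=3, padding=1)
    def forward(self, y):
        return torch.sigmoid(self.net(y))

class TinyExposureNet(nn.Module):
    """Placeholder for the exposure synthesis module G: image + two exposure maps in."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 1, kernel_size=3, padding=1)
    def forward(self, y, theta_in, theta_tgt):
        maps = torch.full_like(y, theta_in), torch.full_like(y, theta_tgt)
        return torch.sigmoid(self.net(torch.cat([y, *maps], dim=1)))

colorizer_aug = TinyColorizer()   # F^aug: would be trained on exposure-augmented data
colorizer     = TinyColorizer()   # F: trained at a single reference exposure
exposure_net  = TinyExposureNet() # G: exposure synthesis module

def colorize_standard(Y):
    # (b)(i) Standard route: the colorizer alone must absorb exposure variation.
    return colorizer_aug(Y)

def colorize_ours(Y, theta_input, theta_target):
    # (b)(ii) Proposed route: first map Y to the reference exposure, then colorize.
    Y_corrected = exposure_net(Y, theta_input, theta_target)
    return colorizer(Y_corrected)

Y = torch.randint(0, 2, (1, 1, 64, 64)).float()  # one binary frame
rgb = colorize_ours(Y, theta_input=0.2, theta_target=1.0)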

Methodology

An illustration of our proposed method for exposure-adaptive colorization: a binary image \(\mathbf{Y}\), which can be overexposed or underexposed, is input to the proposed exposure synthesis module. Colorization of the binary image can then be performed by Single Image Colorization (SIC) or Burst Image Colorization (BIC). (i) SIC: based on the input and target exposure levels, \(\widetilde\theta_\text{input}\) and \(\widetilde\theta_\text{target}\), the module adjusts the weights of an exposure synthesis network \(\mathcal{G}\), which then generates an exposure-corrected image. Note that the corrected image is not necessarily binary. Since the colorization module \(\mathcal{F}\) is trained to colorize images of only a specific exposure, the exposure synthesis module ensures that the corrected image has an exposure similar to the one on which \(\mathcal{F}\) was trained. (ii) BIC: for BIC, we generate images with varying exposures as input to the burst image colorization network \(\mathcal{F}_{\text{burst}}\). With the help of Cross Non-Local Fusion blocks, the trained network exploits the complementary information across multiple exposures to synthesize colors in regions of the image where the SIC approach alone cannot.
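As a rough illustration of the BIC path, the sketch below Euler-integrates a toy "exposure field" from the input exposure toward several target exposures, producing a small burst that a stand-in fusion network colorizes. The ODE field, the fixed-step Euler solver, and the burst network here are illustrative assumptions; the paper's NeuralODE-based \(\mathcal{G}\) and \(\mathcal{F}_{\text{burst}}\) with Cross Non-Local Fusion blocks are not reproduced.

import torch
import torch.nn as nn

class ToyExposureField(nn.Module):
    """dx/dtheta: how the image changes as exposure increases (toy stand-in)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(1, 1, kernel_size=3, padding=1)
    def forward(self, x):
        return self.net(x)

def synthesize_exposures(field, y, theta_input, thetas, steps=8):
    """Euler-integrate the exposure field from theta_input to each target theta,
    yielding a small set of exposure-corrected (non-binary) images."""
    outputs = []
    for theta_target in thetas:
        x, dt = y.clone(), (theta_target - theta_input) / steps
        for _ in range(steps):
            x = x + dt * field(x)   # one Euler step along the exposure axis
        outputs.append(x)
    return torch.stack(outputs, dim=1)  # (batch, num_exposures, 1, H, W)

class ToyBurstColorizer(nn.Module):
    """Fuses a burst of synthesized exposures into one RGB image (toy stand-in)."""
    def __init__(self, num_exposures):
        super().__init__()
        self.net = nn.Conv2d(num_exposures, 3, kernel_size=3, padding=1)
    def forward(self, burst):                        # burst: (B, N, 1, H, W)
        return torch.sigmoid(self.net(burst.squeeze(2)))

field, f_burst = ToyExposureField(), ToyBurstColorizer(num_exposures=3)
Y = torch.randint(0, 2, (1, 1, 64, 64)).float()      # one binary frame
burst = synthesize_exposures(field, Y, theta_input=0.2, thetas=[0.5, 1.0, 2.0])
rgb = f_burst(burst)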

BibTeX

@inproceedings{purohitquantacolor2024,
      author    = {Vishal Purohit and Junjie Luo and Yiheng Chi and Qi Guo and Stanley H. Chan and Qiang Qiu},
      title     = {Generative Quanta Color Imaging},
      year      = {2024},
      booktitle = {CVPR},
}