Plenoptic Sampling

Sampling theorem for image-based rendering: how much geometrical and textural information is needed to generate a continuous representation of the plenoptic function?

Abstract

This paper studies the problem of plenoptic sampling in image-based rendering (IBR). From a spectral analysis of light field signals and using the sampling theorem, we mathematically derive the analytical functions to determine the minimum sampling rate for light field rendering. The spectral support of a light field signal is bounded by the minimum and maximum depths only, no matter how complicated the spectral support might be because of depth variations in the scene. The minimum sampling rate for light field rendering is obtained by compacting the replicas of the spectral support of the sampled light field within the smallest interval. Given the minimum and maximum depths, a reconstruction filter with an optimal and constant depth can be designed to achieve anti-aliased light field rendering.

Plenoptic sampling goes beyond the minimum number of images needed for anti-aliased light field rendering. More significantly, it utilizes the scene depth information to determine the minimum sampling curve in the joint image and geometry space. The minimum sampling curve quantitatively describes the relationship among three key elements in IBR systems: scene complexity (geometrical and textural information), the number of image samples, and the output resolution. Therefore, plenoptic sampling bridges the gap between image-based rendering and traditional geometry-based rendering. Experimental results demonstrate the effectiveness of our approach.

Project description

In this paper, we study plenoptic sampling, or how many samples are needed for plenoptic modeling. Plenoptic sampling can be stated as:

How many samples of the plenoptic function (e.g., from a 4D light field) and how much geometrical and textural information are needed to generate a continuous representation of the plenoptic function?

Specifically, our objective in this paper is to tackle the following two problems under plenoptic sampling, with and without geometrical information:

  • Minimum sampling rate for light field rendering;
  • Minimum sampling curve in joint image and geometry space.

We formulate the sampling analysis as a high-dimensional signal processing problem. In our analysis, we assume Lambertian surfaces and a uniform sampling geometry (lattice) for the light field. Rather than attempting to obtain a closed-form general solution to the 4D light field spectral analysis, we only analyze the bounds of the spectral support of the light field signals. A key result presented in this paper is that the spectral support of a light field signal is bounded by only the minimum and maximum depths, irrespective of how complicated the spectral support might be because of depth variations in the scene. Given the minimum and maximum depths, a reconstruction filter with an optimal and constant depth can be designed to achieve anti-aliased light field rendering.
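To make this bound concrete, the following is a brief sketch under the 2D light field parameterization l(t, v) used in our analysis, with t the camera-plane coordinate, v the image-plane coordinate, and f the focal length; the sign convention and the symbol z_c for the optimal rendering depth are chosen here for illustration.

    % A Lambertian point at depth z (no occlusion) is seen by every camera, so
    %   l(t, v) = l(0, v + f t / z),
    % and its spectrum is confined to the line
    \Omega_t \;=\; \frac{f}{z}\,\Omega_v .
    % Over the whole scene, the spectral support therefore lies in the wedge
    % between the lines for z_min and z_max, and the optimal constant rendering
    % depth z_c (the orientation of the reconstruction filter) satisfies
    \frac{1}{z_c} \;=\; \frac{1}{2}\left(\frac{1}{z_{\min}} + \frac{1}{z_{\max}}\right).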

Results

We mathematically derive the minimum sampling curve for image-based rendering. In this relation, N_image and N_depth are the number of images and the number of depth layers, respectively, and B_v^s, the highest frequency of the scene texture distribution, represents the complexity of the textural information. The two remaining band limits that enter the relation are inversely proportional to the resolutions of the capturing camera and the rendering camera, respectively.
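The exact expression is derived in the paper; as a rough guide to its form only, assuming the two-plane parameterization above with focal length f, camera-plane extent T, and pixel spacings delta_c and delta_r of the capturing and rendering cameras (symbols introduced here for illustration):

    % Effective band limit: the texture spectrum capped by both camera resolutions.
    B_v \;=\; \min\!\left(B_v^{s},\ \frac{1}{2\delta_c},\ \frac{1}{2\delta_r}\right)
    % Maximum camera spacing when only z_min and z_max are known (approximately):
    \Delta t_{\max} \;\approx\; \frac{1}{f\,B_v\left(1/z_{\min} - 1/z_{\max}\right)}
    % With N_depth depth layers, the per-layer depth interval shrinks by a factor
    % of N_depth, the admissible camera spacing grows by the same factor, and
    (N_{\text{image}} - 1)\,N_{\text{depth}} \;\approx\; f\,B_v\,T\left(1/z_{\min} - 1/z_{\max}\right),
    % i.e. the minimum sampling curve is approximately a hyperbola in the joint
    % image and geometry space.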

The following figures show the minimum sampling curve for the "Statue" object in the joint image and geometry space with accurate geometry. The quality of the rendered images along the minimum sampling curve is almost indistinguishable from that obtained using all images and accurate depth. Note that the sampling points in the figure are chosen slightly above the minimum sampling curve because of quantization, and that the number of images in the following figure refers to the number of sample images along one direction.

[Figure: rendered views of the Statue at the labeled sampling points in the joint image and geometry space. Panel labels give (number of depth layers, number of images along one direction): A(2,32), B(4,16), C(7,8), D(13,4), E(25,2), F(accurate depth,32).]

The minimum sampling curve also determines the minimum number of depth layers required for a given number of image samples. The following figure compares the rendering quality obtained with different numbers of depth layers for a fixed number of image samples. With 2×2 image samples of the Head, images (A) to (E) show the rendered images with 4, 8, 10, 12, and 24 layers of depth, respectively. According to our minimum sampling curve, the minimum sampling point with 2×2 images of the Head is approximately 12 layers of depth. Noticeable visual artifacts appear when the number of depth layers falls below the minimum sampling point, as shown in images (A) to (C). On the other hand, using more depth layers than the minimum does not improve the rendering quality, as shown in images (D) and (E).

[Figure: rendered images of the Head with 2×2 image samples and varying numbers of depth layers. Panel labels give (depth layers, images along one direction): A(4,2), B(8,2), C(10,2), D(12,2), E(24,2).]
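As a toy illustration of how the curve can be used to budget depth layers for a given camera grid, here is a small Python sketch; the hyperbolic approximation and the helper min_depth_layers are assumptions made for illustration, calibrated from the single Head data point above (2×2 images needing about 12 layers), not taken from the full derivation.

    import math

    # Approximate the minimum sampling curve by a hyperbola:
    #   (images per direction) x (depth layers) ~ constant,
    # calibrated from one known point on the curve (here: the Head scene).
    def min_depth_layers(n_images_per_dir, calib_images=2, calib_layers=12):
        """Estimate how many depth layers a grid with n_images_per_dir images
        along each direction needs (hypothetical helper, rough approximation)."""
        constant = calib_images * calib_layers   # 2 x 12 = 24 for the Head
        return math.ceil(constant / n_images_per_dir)

    for n in (2, 4, 8, 16):
        print(n, "images/direction ->", min_depth_layers(n), "depth layers (approx.)")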

With the minimum sampling curve, we can also determine the minimum number of image samples required for any given number of depth layers. For the Table scene, we find that 3 bits (or 8 layers) of depth information suffice for light field rendering when combined with 16×16 image samples, as shown in image (D) of the following figure. When the number of image samples falls below the minimum sampling point, light field rendering produces noticeable artifacts, as shown in images (A) to (C).

[Figure: rendered images of the Table scene with 8 depth layers (3 bits) and varying numbers of image samples. Panel labels give (depth layers, images along one direction): A(8,4), B(8,6), C(8,8), D(8,16), E(8,32).]

Publications

    Jinxiang Chai, Xin Tong, Shing-Chow Chan, and Harry Shum. Plenoptic Sampling. In Proceedings of ACM SIGGRAPH 2000. PDF (11.6 MB)

Videos, slides and data

Project team

  • Jinxiang Chai  (Carnegie Mellon University)
  • Xin Tong (Microsoft Research Asia)
  • Shing-Chow Chan (University of Hong Kong)
  • Harry Shum (Microsoft Research Asia)

Related projects


Jinxiang Chai
Last Updated: May 10, 2004