Raw Light Fields
Lytro (1st generation) dataset: The full dataset is described and available here for download.
When using this dataset in your research, please cite A. Mousnier, E. Vural, C. Guillemot, Partial light field tomographic reconstruction from a fixed-camera focal stack, https://arxiv.org/abs/1503.01903, March 2015. (pdf)
Lytro (Illum) dataset: The full dataset is described and available for download here.
When using this dataset in your research, please cite M. Le Pendu, X. Jiang, C. Guillemot, Light Field inpainting propagation via low rank matrix completion, IEEE Trans. on Image Processing, vol. 27, No. 4, pp. 1981-1993, Jan. 2018.
Lytro 1st generation and Illum dataset: A larger dataset captured by both a first generation Lytro and an Illum camera can be retrieved here.
When using this dataset in your research, please cite X. Jiang, M. Le Pendu, R. Farrugia, C. Guillemot, Light Field Compression with Homography-based Low Rank Approximation, special issue on Light Field Image Processing of the IEEE J. on Selected Topics in Signal Processing, IEEE J-STSP, vol. 11, No. 7, pp. 1132-1145, Oct. 2017.
Raytrix (R8) dataset: Light field video dataset captured by an R8 Raytrix camera (with disparity maps).
When using this dataset in your research, please cite L. Guillo, X. Jiang, G. Lafruit, C. Guillemot, Light field video dataset captured by a R8 Raytrix camera (with disparity maps), ISO/IEC JTC1/SC29/WG11 MPEG2018/m42468, ISO/IEC JTC1/SC29/WG1 JPEG2018/m79046, International Organisation for Standardisation, ISO/IEC JTC1/SC29/WG1 & WG11, April 2018, San Diego, CA, US. (pdf)
Synthetic video light fields based on the Sintel movie, modified in the Blender 3D software to render an array of 3 x 3 views. We have modified the scenes to generate not only the final render, but also a clean render without lighting effects, motion blur, or semi-transparent objects. This dataset is composed of two synthetic light fields (Bamboo2 and Temple1) of 3 x 3 views of 1024 x 536 pixels and 50 frames. The light field views are provided with the corresponding ground truth scene flow (optical flow and disparity variation).
When using this dataset in your research, please cite P. David, M. Le Pendu, C. Guillemot, Sparse to dense scene flow estimation from light fields, submitted, ICIP 2019. (pdf)
Inria synthetic light field datasets are rendered with the Blender 3D software. They contain a densely sampled light field dataset and a sparsely sampled one. The light fields in both datasets have a spatial resolution of 512 x 512 and an angular resolution of 9 x 9. We provide a sub-aperture image (png format) and a disparity map (npy/mat format) for each viewpoint of the light field. When using our datasets, please cite our article: J. Shi, X. Jiang, C. Guillemot, A framework for learning depth from a flexible subset of dense and sparse light field views, IEEE Trans. on Image Processing, July 2019. (pdf)
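For readers new to the data layout, the Python sketch below shows one possible way of loading a sub-aperture image and its disparity map from these datasets; the file names and the .mat variable name are hypothetical and must be adapted to the actual folder structure of the download.

import numpy as np
from PIL import Image              # sub-aperture images are stored as png
from scipy.io import loadmat       # disparity maps may be stored as .mat

def load_view(root, u, v):
    """Load the sub-aperture image at angular position (u, v) and its disparity map.
    File names are hypothetical; adapt them to the dataset's actual layout."""
    img = np.asarray(Image.open(f"{root}/view_{u}_{v}.png"))          # (512, 512, 3)
    try:
        disp = np.load(f"{root}/disparity_{u}_{v}.npy")               # (512, 512) float
    except FileNotFoundError:
        disp = loadmat(f"{root}/disparity_{u}_{v}.mat")["disparity"]  # hypothetical key
    return img, disp

# Example: stack all 9 x 9 views into one array of shape (9, 9, 512, 512, 3).
# lf = np.stack([np.stack([load_view("scene", u, v)[0] for v in range(9)]) for u in range(9)])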
The raw Lytro light field data can be decoded with the Matlab Light Field Toolbox. The decoding process includes de-vignetting, color de-mosaicking, conversion from the hexagonal to a rectangular sampling grid, color correction, and extraction of the sub-aperture images. Matlab functions implementing improved demosaicing and alignment methods can be downloaded below.
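As a rough illustration of the last step of this pipeline only, the Python sketch below extracts sub-aperture images from a lenslet image that is assumed to be already de-vignetted, demosaicked and resampled onto a rectangular grid; calibration, hexagonal resampling and colour correction are handled by the toolbox itself (e.g. LFUtilDecodeLytroFolder in the Matlab toolbox).

import numpy as np

def extract_subapertures(lenslet_img, n):
    """Rearrange a rectangular lenslet image into sub-aperture views.
    lenslet_img: (H*n, W*n, 3) array with n x n pixels behind each microlens.
    Returns an (n, n, H, W, 3) array: views[u, v] is the sub-aperture image
    made of pixel (u, v) under every microlens."""
    H, W = lenslet_img.shape[0] // n, lenslet_img.shape[1] // n
    views = lenslet_img[:H * n, :W * n].reshape(H, n, W, n, -1)
    return views.transpose(1, 3, 0, 2, 4)

# Example on a dummy lenslet image with 9 x 9 pixels per microlens:
# sub = extract_subapertures(np.zeros((90, 90, 3)), n=9)   # shape (9, 9, 10, 10, 3)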
Copyright
All datasets and benchmarks on this page are copyrighted by us and published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. This means that you must attribute the work in the manner specified by the authors, you may not use this work for commercial purposes, and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license.
Software
Decoding Raw Light Fields Captured by Plenoptic Cameras
A modified version of D. Dansereau's Light Field Toolbox can be downloaded here.
This version is called v0.4-CLIM as it originates from D. Dansereau's LF Toolbox v0.4. The following features have been added:
- Alternative calibration using existing mlaCalib information from LYTRO metadata.
- Automatic White Balancing.
- Barycentric interpolation (method from the KAIST light field decoding pipeline).
- Hot pixel correction (method from the KAIST light field decoding pipeline).
- White lenslet image guided demosaicing and alignment method described here (P. David, M. Le Pendu, C. Guillemot, "Lenslet white image guided demosaicing for plenoptic cameras", IEEE Multimedia Signal Processing (MMSP) Workshop, Oct. 2017). All documents and papers that use or edit this code must cite the above paper.
More details on this version can be found here.
BM PCA-RR Light Field Super-Resolution
The code can be downloaded here.
This code implements the method described here (R.A. Farrugia, C. Galea, C. Guillemot, "Super Resolution of Light Field Images using Linear Subspace Projection of Patch-Volumes", IEEE Journal on Selected Topics in Signal Processing, IEEE J-STSP, vol. 11, No. 7, pp. 1058-1071, Oct. 2017).
This code loads a light field and learns the linear projection needed to restore each patch volume. For now, only spatial super-resolution is provided as part of the demo, which allows researchers to reproduce the results presented in the original paper. All documents and papers that use or edit this code must cite the above paper.
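As a minimal illustration of the underlying idea (a PCA subspace of low-resolution patch volumes combined with a ridge-regularised linear map to the high-resolution volumes), a simplified numpy sketch is given below; it omits the paper's patch-volume matching, overlap blending and back-projection steps and is not the authors' implementation.

import numpy as np

def fit_pca_rr(X_lr, X_hr, dim=64, lam=1e-2):
    """X_lr: (N, d_lr) low-resolution patch volumes, X_hr: (N, d_hr) high-resolution ones."""
    mu = X_lr.mean(axis=0)
    # PCA basis of the LR patch volumes via SVD of the centered data.
    _, _, Vt = np.linalg.svd(X_lr - mu, full_matrices=False)
    dim = min(dim, Vt.shape[0])
    B = Vt[:dim].T                              # (d_lr, dim) subspace basis
    Z = (X_lr - mu) @ B                         # subspace coefficients
    # Ridge regression from coefficients to HR patch volumes.
    A = np.linalg.solve(Z.T @ Z + lam * np.eye(dim), Z.T @ X_hr)
    return mu, B, A

def apply_pca_rr(x_lr, mu, B, A):
    """Predict the high-resolution patch volume for one LR patch volume x_lr."""
    return ((x_lr - mu) @ B) @ A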
HLRA: Homography-based Low Rank Approximation
The code can be downloaded here. It is released as open source under the Affero GPL v3 (AGPLv3) licence. This code implements the method described here (X. Jiang, M. Le Pendu, R.A. Farrugia, C. Guillemot, "Light Field Compression with Homography-based Low Rank Approximation", IEEE Journal on Selected Topics in Signal Processing, IEEE J-STSP, vol. 11, No. 7, pp. 1132-1145, Oct. 2017).
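The core idea, aligning all sub-aperture views and approximating the stacked views by a low rank matrix, can be sketched as follows; plain integer shifts are used below as a crude stand-in for the per-view homographies optimised in the paper.

import numpy as np

def low_rank_approx(views, shifts, k=4):
    """views: list of (H, W) arrays, shifts: list of (dy, dx) integer shifts per view."""
    # Align the views (the paper optimises homographies; we only roll by integer shifts).
    aligned = [np.roll(v, (-dy, -dx), axis=(0, 1)) for v, (dy, dx) in zip(views, shifts)]
    M = np.stack([a.ravel() for a in aligned], axis=1)       # (H*W, n_views) matrix
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    M_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]                 # rank-k approximation
    return [M_k[:, i].reshape(views[i].shape) for i in range(len(views))]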
LF-GBR: Light field graph-based representation
The code can be downloaded here. It is released as open source under the Affero GPL v3 (AGPLv3) licence. This code implements the method described here (X. Su, M. Rizkallah, T. Maugey, C. Guillemot, "Graph-based light fields representation and coding using geometry information", IEEE International Conference on Image Processing (ICIP), Beijing, 17-20 Sept. 2017).
Geometry-Aware-GBT: Geometry-Aware Graph-Based Transforms for Light Field Compression *
The code can be downloaded here. It is released as open source under the Affero GPL v3 (AGPLv3) licence. This code implements the method described here (M. Rizkallah, X. Su, T. Maugey, C. Guillemot, Geometry-Aware Graph Transforms for Light Field Compact Representation, IEEE Trans. on Image Processing, vol. 29, pp. 602-616, July 2019).
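Both graph-based entries above build on a graph Fourier transform: pixels (or super-rays) are connected by a weighted graph and the signal is projected onto the eigenvectors of the graph Laplacian. The sketch below shows this basic transform only; the geometry-aware construction of the graphs themselves is specific to the papers and not reproduced here.

import numpy as np

def graph_fourier_transform(W, x):
    """W: (n, n) symmetric adjacency (weight) matrix, x: (n,) graph signal.
    Returns the transform coefficients and the GFT basis."""
    L = np.diag(W.sum(axis=1)) - W              # combinatorial graph Laplacian
    eigval, U = np.linalg.eigh(L)               # Laplacian eigenvectors = GFT basis
    return U.T @ x, U

# Example on a 4-node path graph:
# W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
# coeffs, U = graph_fourier_transform(W, np.array([1., 2., 3., 4.]))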
LF-LRMC: Light field inpainting via low rank matrix completion
The code can be downloaded here. It is released as open source under the Affero GPL v3 (AGPLv3) licence. This code implements the method described here (M. Le Pendu, X. Jiang, C. Guillemot, Light Field inpainting propagation via low rank matrix completion, IEEE Trans. on Image Processing, vol. 27, No. 4, pp. 1981-1993, Jan. 2018).
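For reference, a generic singular value thresholding (SVT) iteration for low rank matrix completion, the building block this inpainting propagation relies on, can be sketched as follows; the actual method adds the warping and propagation machinery described in the paper.

import numpy as np

def svt_complete(M, mask, tau=5.0, n_iter=200, step=1.0):
    """M: (m, n) matrix with observed entries, mask: boolean array marking observations."""
    Y = np.zeros(M.shape)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt      # shrink singular values
        Y = Y + step * mask * (M - X)                       # enforce observed entries
    return X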
FastLFInpaint: Fast light field inpainting via EPI diffusion *
The code can be downloaded here. It is released as open source under the Affero GPL v3 (AGPLv3) licence. This code implements the method described here (P. Allain, L. Guillo, C. Guillemot, Fast light field inpainting using angular warping with a color-guided disparity interpolation, Advanced Concepts for Intelligent Vision Systems, ACIVS, Poitiers, 24-27 Sept. 2018).
LFDE-OCC: Scene depth with occlusion handling from a sparse set of light field views
The code can be downloaded here. It is released as open source under the Affero GPL v3 (AGPLv3) licence. This code implements the method described here (X. Jiang, M. Le Pendu, C. Guillemot, Depth estimation with occlusion handling from a sparse set of light field views, IEEE International Conf. on Image Processing, 2018).
4DLFDE: A Learning based Depth Estimation Framework for 4D Densely and Sparsely Sampled Light Fields *
The code can be downloaded here. It is released as open source under the Affero GPL v3 (AGPLv3) licence. This code implements the method described here (X. Jiang, J. Shi, C. Guillemot, A learning based depth estimation framework for 4D densely and sparsely sampled light fields, IEEE International Conf. on Acoustics, Speech and Signal Processing, ICASSP, 2019).
LFDE-FLEX: A Framework for Learning Depth from a Flexible Subset of Dense and Sparse Light Field Views *
The code can be downloaded here. It is released as open source under the Affero GPL v3 (AGPLv3) licence. This code implements the method described here (J. Shi, X. Jiang, C. Guillemot, "A Framework for Learning Depth from a Flexible Subset of Dense and Sparse Light Field Views", IEEE Trans. on Image Processing, July 2019).
LFVS: Light Field View Synthesis *
The code can be downloaded here. It is released as open source under the Affero GPL v3 (AGPLv3) licence. This code implements the method described here (J. Shi, X. Jiang, C. Guillemot, Learning Fused Pixel and Feature-based View Reconstructions for Light Fields, CVPR 2020).
Lightweight Neural Network for Monocular View Generation with Occlusion Handling *
The code can be downloaded here. It is released as open source under the Affero GPL v3 (AGPLv3) licence. This code implements the method described here (S. Evain, C. Guillemot, A Lightweight Neural Network for Monocular View Generation with Occlusion Handling, IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), vol. 43, no. 6, pp. 1832-1844, June 2021, published online Dec. 2019).
FDL-Compress: Light Field Compression using Fourier Disparity Layers
The code can be downloaded here. It is released as open source under the Affero GPL v3 (AGPLv3) licence. This code implements the method in E. Dib, M. Le Pendu, C. Guillemot, "Light Field Compression using Fourier Disparity Layers", IEEE ICIP, Sept. 2019.
LF-Slomo: LF video temporal interpolation method *
The code can be downloaded here. It is released as open source under the Affero GPL v3 (AGPLv3) licence. This code implements the method described here (P. David, M. Le Pendu, C. Guillemot, Local 4D Affine Model for Scene Flow Estimation from Sparse Light Fields, IEEE Trans. on Computational Imaging, vol. 6, pp. 791-805, March 2020).
FPFR+: LF view synthesis and video temporal interpolation method *
The code can be downloaded here. It is released as open source under the Affero GPL v3 (AGPLv3) licence. This code implements the method described here (J. Shi, X. Jiang, C. Guillemot, Deep Residual Architecture Using Pixel and Feature Cues for View Synthesis, IEEE Trans. on Computational Imaging, accepted, Feb. 2022).
DeepLFCam: Deep Light Field Acquisition Using Learned Coded Mask Distributions for Color Filter Array Sensors *
The code can be downloaded here. It is released as open source under the Affero GPL v3 (AGPLv3) licence. This code implements the method described here (G. Le Guludec, E. Miandji, C. Guillemot, Deep Light Field Acquisition Using Learned Coded Mask Distributions for Color Filter Array Sensors, IEEE Trans. on Computational Imaging, vol. 7, pp. 475-488, May 2021).
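The acquisition model assumed in this line of work can be illustrated with a small simulation: each sensor measurement is a sum over the angular views of the light field, each view modulated by a (shifted) version of the coded mask. The sketch below uses integer pixel shifts of the mask per view as a simplification and does not model the color filter array or the learned reconstruction network.

import numpy as np

def simulate_measurement(lf, mask):
    """lf: (U, V, H, W) light field, mask: (H, W) transmittance in [0, 1].
    Integer shifts of the mask per view are a crude stand-in for the geometric
    projection of a mask placed at a small distance in front of the sensor."""
    U, V, H, W = lf.shape
    y = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            y += lf[u, v] * np.roll(mask, (u - U // 2, v - V // 2), axis=(0, 1))
    return y / (U * V)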
DeepUnLFCam: Deep Unrolling for Light Field Compressed Acquisition using Coded Masks *
The code can be downloaded here. It is released as open source under the Affero GPL v3 (AGPLv3) licence. This code implements the method described here (G. Le Guludec, C. Guillemot, Deep Unrolling for Light Field Compressed Acquisition using Coded Masks, IEEE Access, Apr. 2022).